00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2454
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3719
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.038 The recommended git tool is: git
00:00:00.038 using credential 00000000-0000-0000-0000-000000000002
00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.059 Fetching changes from the remote Git repository
00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.082 Using shallow fetch with depth 1
00:00:00.082 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.082 > git --version # timeout=10
00:00:00.116 > git --version # 'git version 2.39.2'
00:00:00.116 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.168 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.168 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.127 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.139 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.152 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.152 > git config core.sparsecheckout # timeout=10
00:00:03.163 > git read-tree -mu HEAD # timeout=10
00:00:03.180 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.196 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.197 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.282 [Pipeline] Start of Pipeline
00:00:03.292 [Pipeline] library
00:00:03.293 Loading library shm_lib@master
00:00:03.293 Library shm_lib@master is cached. Copying from home.
00:00:03.308 [Pipeline] node
00:00:03.319 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.321 [Pipeline] {
00:00:03.330 [Pipeline] catchError
00:00:03.331 [Pipeline] {
00:00:03.340 [Pipeline] wrap
00:00:03.346 [Pipeline] {
00:00:03.351 [Pipeline] stage
00:00:03.352 [Pipeline] { (Prologue)
00:00:03.366 [Pipeline] echo
00:00:03.368 Node: VM-host-WFP7
00:00:03.374 [Pipeline] cleanWs
00:00:03.384 [WS-CLEANUP] Deleting project workspace...
00:00:03.384 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.391 [WS-CLEANUP] done
00:00:03.602 [Pipeline] setCustomBuildProperty
00:00:03.669 [Pipeline] httpRequest
00:00:03.973 [Pipeline] echo
00:00:03.975 Sorcerer 10.211.164.20 is alive
00:00:03.984 [Pipeline] retry
00:00:03.986 [Pipeline] {
00:00:03.997 [Pipeline] httpRequest
00:00:04.001 HttpMethod: GET
00:00:04.002 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.002 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.003 Response Code: HTTP/1.1 200 OK
00:00:04.003 Success: Status code 200 is in the accepted range: 200,404
00:00:04.004 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.150 [Pipeline] }
00:00:04.166 [Pipeline] // retry
00:00:04.171 [Pipeline] sh
00:00:04.451 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.465 [Pipeline] httpRequest
00:00:04.791 [Pipeline] echo
00:00:04.792 Sorcerer 10.211.164.20 is alive
00:00:04.800 [Pipeline] retry
00:00:04.801 [Pipeline] {
00:00:04.809 [Pipeline] httpRequest
00:00:04.814 HttpMethod: GET
00:00:04.814 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:04.814 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:04.816 Response Code: HTTP/1.1 200 OK
00:00:04.816 Success: Status code 200 is in the accepted range: 200,404
00:00:04.816 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:18.110 [Pipeline] }
00:00:18.128 [Pipeline] // retry
00:00:18.136 [Pipeline] sh
00:00:18.421 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:20.975 [Pipeline] sh
00:00:21.270 + git -C spdk log --oneline -n5
00:00:21.270 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:21.270 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:21.270 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:21.270 66289a6db build: use VERSION file for storing version
00:00:21.270 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:21.291 [Pipeline] withCredentials
00:00:21.302 > git --version # timeout=10
00:00:21.316 > git --version # 'git version 2.39.2'
00:00:21.335 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:21.336 [Pipeline] {
00:00:21.346 [Pipeline] retry
00:00:21.347 [Pipeline] {
00:00:21.362 [Pipeline] sh
00:00:21.647 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:21.920 [Pipeline] }
00:00:21.938 [Pipeline] // retry
00:00:21.943 [Pipeline] }
00:00:21.959 [Pipeline] // withCredentials
00:00:21.969 [Pipeline] httpRequest
00:00:22.285 [Pipeline] echo
00:00:22.287 Sorcerer 10.211.164.20 is alive
00:00:22.297 [Pipeline] retry
00:00:22.299 [Pipeline] {
00:00:22.313 [Pipeline] httpRequest
00:00:22.317 HttpMethod: GET
00:00:22.318 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:22.319 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:22.334 Response Code: HTTP/1.1 200 OK
00:00:22.335 Success: Status code 200 is in the accepted range: 200,404
00:00:22.335 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:14.673 [Pipeline] }
00:01:14.687 [Pipeline] // retry
00:01:14.694 [Pipeline] sh
00:01:14.976 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:16.375 [Pipeline] sh
00:01:16.658 + git -C dpdk log --oneline -n5
00:01:16.658 caf0f5d395 version: 22.11.4
00:01:16.658 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:16.658 dc9c799c7d vhost: fix missing spinlock unlock
00:01:16.658 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:16.658 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:16.676 [Pipeline] writeFile
00:01:16.690 [Pipeline] sh
00:01:16.976 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:16.988 [Pipeline] sh
00:01:17.272 + cat autorun-spdk.conf
00:01:17.272 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.272 SPDK_RUN_ASAN=1
00:01:17.272 SPDK_RUN_UBSAN=1
00:01:17.272 SPDK_TEST_RAID=1
00:01:17.272 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:17.272 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:17.272 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.280 RUN_NIGHTLY=1
00:01:17.282 [Pipeline] }
00:01:17.294 [Pipeline] // stage
00:01:17.307 [Pipeline] stage
00:01:17.309 [Pipeline] { (Run VM)
00:01:17.321 [Pipeline] sh
00:01:17.604 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:17.604 + echo 'Start stage prepare_nvme.sh'
00:01:17.604 Start stage prepare_nvme.sh
00:01:17.604 + [[ -n 1 ]]
00:01:17.604 + disk_prefix=ex1
00:01:17.604 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:17.604 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:17.604 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:17.604 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.604 ++ SPDK_RUN_ASAN=1
00:01:17.604 ++ SPDK_RUN_UBSAN=1
00:01:17.604 ++ SPDK_TEST_RAID=1
00:01:17.604 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:17.604 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:17.604 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.604 ++ RUN_NIGHTLY=1
00:01:17.604 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:17.604 + nvme_files=()
00:01:17.604 + declare -A nvme_files
00:01:17.604 + backend_dir=/var/lib/libvirt/images/backends
00:01:17.604 + nvme_files['nvme.img']=5G
00:01:17.604 + nvme_files['nvme-cmb.img']=5G
00:01:17.604 + nvme_files['nvme-multi0.img']=4G
00:01:17.604 + nvme_files['nvme-multi1.img']=4G
00:01:17.604 + nvme_files['nvme-multi2.img']=4G
00:01:17.604 + nvme_files['nvme-openstack.img']=8G
00:01:17.604 + nvme_files['nvme-zns.img']=5G
00:01:17.604 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:17.604 + (( SPDK_TEST_FTL == 1 ))
00:01:17.604 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:17.604 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.604 + for nvme in "${!nvme_files[@]}"
00:01:17.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:17.604 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:17.865 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:17.865 + echo 'End stage prepare_nvme.sh'
00:01:17.865 End stage prepare_nvme.sh
00:01:17.877 [Pipeline] sh
00:01:18.161 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:18.161 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:18.161
00:01:18.161 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:18.161 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:18.162 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:18.162 HELP=0
00:01:18.162 DRY_RUN=0
00:01:18.162 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:18.162 NVME_DISKS_TYPE=nvme,nvme,
00:01:18.162 NVME_AUTO_CREATE=0
00:01:18.162 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:18.162 NVME_CMB=,,
00:01:18.162 NVME_PMR=,,
00:01:18.162 NVME_ZNS=,,
00:01:18.162 NVME_MS=,,
00:01:18.162 NVME_FDP=,,
00:01:18.162 SPDK_VAGRANT_DISTRO=fedora39
00:01:18.162 SPDK_VAGRANT_VMCPU=10
00:01:18.162 SPDK_VAGRANT_VMRAM=12288
00:01:18.162 SPDK_VAGRANT_PROVIDER=libvirt
00:01:18.162 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:18.162 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:18.162 SPDK_OPENSTACK_NETWORK=0
00:01:18.162 VAGRANT_PACKAGE_BOX=0
00:01:18.162 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:18.162 FORCE_DISTRO=true
00:01:18.162 VAGRANT_BOX_VERSION=
00:01:18.162 EXTRA_VAGRANTFILES=
00:01:18.162 NIC_MODEL=virtio
00:01:18.162
00:01:18.162 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:18.162 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:20.068 Bringing machine 'default' up with 'libvirt' provider...
00:01:20.328 ==> default: Creating image (snapshot of base box volume).
00:01:20.598 ==> default: Creating domain with the following settings...
00:01:20.598 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734063440_6b78ec9537c0612a4b54
00:01:20.598 ==> default: -- Domain type: kvm
00:01:20.598 ==> default: -- Cpus: 10
00:01:20.598 ==> default: -- Feature: acpi
00:01:20.598 ==> default: -- Feature: apic
00:01:20.598 ==> default: -- Feature: pae
00:01:20.598 ==> default: -- Memory: 12288M
00:01:20.598 ==> default: -- Memory Backing: hugepages:
00:01:20.598 ==> default: -- Management MAC:
00:01:20.598 ==> default: -- Loader:
00:01:20.598 ==> default: -- Nvram:
00:01:20.598 ==> default: -- Base box: spdk/fedora39
00:01:20.598 ==> default: -- Storage pool: default
00:01:20.598 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734063440_6b78ec9537c0612a4b54.img (20G)
00:01:20.598 ==> default: -- Volume Cache: default
00:01:20.598 ==> default: -- Kernel:
00:01:20.598 ==> default: -- Initrd:
00:01:20.598 ==> default: -- Graphics Type: vnc
00:01:20.598 ==> default: -- Graphics Port: -1
00:01:20.598 ==> default: -- Graphics IP: 127.0.0.1
00:01:20.598 ==> default: -- Graphics Password: Not defined
00:01:20.598 ==> default: -- Video Type: cirrus
00:01:20.598 ==> default: -- Video VRAM: 9216
00:01:20.598 ==> default: -- Sound Type:
00:01:20.598 ==> default: -- Keymap: en-us
00:01:20.598 ==> default: -- TPM Path:
00:01:20.598 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:20.598 ==> default: -- Command line args:
00:01:20.598 ==> default: -> value=-device,
00:01:20.598 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:20.598 ==> default: -> value=-drive,
00:01:20.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:20.598 ==> default: -> value=-device,
00:01:20.598 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.598 ==> default: -> value=-device,
00:01:20.598 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:20.598 ==> default: -> value=-drive,
00:01:20.599 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:20.599 ==> default: -> value=-device,
00:01:20.599 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.599 ==> default: -> value=-drive,
00:01:20.599 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:20.599 ==> default: -> value=-device,
00:01:20.599 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.599 ==> default: -> value=-drive,
00:01:20.599 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:20.599 ==> default: -> value=-device,
00:01:20.599 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.599 ==> default: Creating shared folders metadata...
00:01:20.599 ==> default: Starting domain.
00:01:22.596 ==> default: Waiting for domain to get an IP address...
00:01:40.703 ==> default: Waiting for SSH to become available...
00:01:40.703 ==> default: Configuring and enabling network interfaces...
00:01:45.979 default: SSH address: 192.168.121.55:22
00:01:45.979 default: SSH username: vagrant
00:01:45.979 default: SSH auth method: private key
00:01:47.884 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:56.004 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:02.583 ==> default: Mounting SSHFS shared folder...
00:02:03.964 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:03.964 ==> default: Checking Mount..
00:02:05.875 ==> default: Folder Successfully Mounted!
00:02:05.876 ==> default: Running provisioner: file...
00:02:06.815 default: ~/.gitconfig => .gitconfig
00:02:07.385
00:02:07.385 SUCCESS!
00:02:07.385
00:02:07.385 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:07.385 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:07.385 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:07.385
00:02:07.486 [Pipeline] }
00:02:07.499 [Pipeline] // stage
00:02:07.506 [Pipeline] dir
00:02:07.507 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:07.509 [Pipeline] {
00:02:07.520 [Pipeline] catchError
00:02:07.521 [Pipeline] {
00:02:07.533 [Pipeline] sh
00:02:07.816 + vagrant ssh-config --host vagrant
00:02:07.816 + sed -ne /^Host/,$p
00:02:07.816 + tee ssh_conf
00:02:10.351 Host vagrant
00:02:10.351 HostName 192.168.121.55
00:02:10.351 User vagrant
00:02:10.351 Port 22
00:02:10.351 UserKnownHostsFile /dev/null
00:02:10.351 StrictHostKeyChecking no
00:02:10.351 PasswordAuthentication no
00:02:10.351 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:10.351 IdentitiesOnly yes
00:02:10.351 LogLevel FATAL
00:02:10.351 ForwardAgent yes
00:02:10.351 ForwardX11 yes
00:02:10.351
00:02:10.364 [Pipeline] withEnv
00:02:10.366 [Pipeline] {
00:02:10.378 [Pipeline] sh
00:02:10.662 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:10.663 source /etc/os-release
00:02:10.663 [[ -e /image.version ]] && img=$(< /image.version)
00:02:10.663 # Minimal, systemd-like check.
00:02:10.663 if [[ -e /.dockerenv ]]; then
00:02:10.663 # Clear garbage from the node's name:
00:02:10.663 # agt-er_autotest_547-896 -> autotest_547-896
00:02:10.663 # $HOSTNAME is the actual container id
00:02:10.663 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:10.663 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:10.663 # We can assume this is a mount from a host where container is running,
00:02:10.663 # so fetch its hostname to easily identify the target swarm worker.
00:02:10.663 container="$(< /etc/hostname) ($agent)"
00:02:10.663 else
00:02:10.663 # Fallback
00:02:10.663 container=$agent
00:02:10.663 fi
00:02:10.663 fi
00:02:10.663 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:10.663
00:02:10.935 [Pipeline] }
00:02:10.949 [Pipeline] // withEnv
00:02:10.956 [Pipeline] setCustomBuildProperty
00:02:10.968 [Pipeline] stage
00:02:10.970 [Pipeline] { (Tests)
00:02:10.984 [Pipeline] sh
00:02:11.267 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:11.539 [Pipeline] sh
00:02:11.822 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:12.095 [Pipeline] timeout
00:02:12.095 Timeout set to expire in 1 hr 30 min
00:02:12.097 [Pipeline] {
00:02:12.109 [Pipeline] sh
00:02:12.388 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:12.956 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:12.967 [Pipeline] sh
00:02:13.250 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:13.524 [Pipeline] sh
00:02:13.809 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:14.084 [Pipeline] sh
00:02:14.366 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:14.625 ++ readlink -f spdk_repo
00:02:14.625 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:14.625 + [[ -n /home/vagrant/spdk_repo ]]
00:02:14.625 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:14.625 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:14.625 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:14.625 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:14.625 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:14.625 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:14.625 + cd /home/vagrant/spdk_repo
00:02:14.625 + source /etc/os-release
00:02:14.625 ++ NAME='Fedora Linux'
00:02:14.625 ++ VERSION='39 (Cloud Edition)'
00:02:14.625 ++ ID=fedora
00:02:14.625 ++ VERSION_ID=39
00:02:14.625 ++ VERSION_CODENAME=
00:02:14.625 ++ PLATFORM_ID=platform:f39
00:02:14.625 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:14.625 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.625 ++ LOGO=fedora-logo-icon
00:02:14.625 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:14.625 ++ HOME_URL=https://fedoraproject.org/
00:02:14.625 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:14.625 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.625 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.625 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.625 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:14.625 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.625 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:14.625 ++ SUPPORT_END=2024-11-12
00:02:14.625 ++ VARIANT='Cloud Edition'
00:02:14.625 ++ VARIANT_ID=cloud
00:02:14.625 + uname -a
00:02:14.625 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:14.625 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:15.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:15.195 Hugepages
00:02:15.195 node hugesize free / total
00:02:15.195 node0 1048576kB 0 / 0
00:02:15.195 node0 2048kB 0 / 0
00:02:15.195
00:02:15.195 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:15.195 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:15.195 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:15.195 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:15.195 + rm -f /tmp/spdk-ld-path
00:02:15.195 + source autorun-spdk.conf
00:02:15.195 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:15.195 ++ SPDK_RUN_ASAN=1
00:02:15.195 ++ SPDK_RUN_UBSAN=1
00:02:15.195 ++ SPDK_TEST_RAID=1
00:02:15.195 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:15.195 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:15.195 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:15.195 ++ RUN_NIGHTLY=1
00:02:15.195 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:15.195 + [[ -n '' ]]
00:02:15.195 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:15.195 + for M in /var/spdk/build-*-manifest.txt
00:02:15.195 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:15.195 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:15.195 + for M in /var/spdk/build-*-manifest.txt
00:02:15.195 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:15.195 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:15.454 + for M in /var/spdk/build-*-manifest.txt
00:02:15.454 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:15.454 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:15.454 ++ uname
00:02:15.454 + [[ Linux == \L\i\n\u\x ]]
00:02:15.454 + sudo dmesg -T
00:02:15.454 + sudo dmesg --clear
00:02:15.454 + dmesg_pid=6165
00:02:15.454 + sudo dmesg -Tw
00:02:15.454 + [[ Fedora Linux == FreeBSD ]]
00:02:15.454 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:15.454 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:15.454 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:15.454 + [[ -x /usr/src/fio-static/fio ]]
00:02:15.454 + export FIO_BIN=/usr/src/fio-static/fio
00:02:15.454 + FIO_BIN=/usr/src/fio-static/fio
00:02:15.454 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:15.454 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:15.454 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:15.454 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:15.454 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:15.454 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:15.454 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:15.454 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:15.454 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:15.454 04:18:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:15.454 04:18:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:15.454 04:18:15 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:02:15.454 04:18:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:15.454 04:18:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:15.712 04:18:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:15.712 04:18:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:15.712 04:18:15 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:15.712 04:18:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:15.712 04:18:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:15.712 04:18:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:15.712 04:18:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:15.712 04:18:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:15.713 04:18:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:15.713 04:18:15 -- paths/export.sh@5 -- $ export PATH
00:02:15.713 04:18:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:15.713 04:18:15 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:15.713 04:18:15 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:15.713 04:18:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734063495.XXXXXX
00:02:15.713 04:18:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734063495.BB5mE1
00:02:15.713 04:18:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:15.713 04:18:15 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:15.713 04:18:15 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:15.713 04:18:15 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:15.713 04:18:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:15.713 04:18:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:15.713 04:18:15 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:15.713 04:18:15 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:15.713 04:18:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.713 04:18:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:15.713 04:18:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:15.713 04:18:15 -- pm/common@17 -- $ local monitor
00:02:15.713 04:18:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:15.713 04:18:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:15.713 04:18:15 -- pm/common@21 -- $ date +%s
00:02:15.713 04:18:15 -- pm/common@25 -- $ sleep 1
00:02:15.713 04:18:15 -- pm/common@21 -- $ date +%s
00:02:15.713 04:18:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734063495
00:02:15.713 04:18:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734063495
00:02:15.713 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734063495_collect-cpu-load.pm.log
00:02:15.713 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734063495_collect-vmstat.pm.log
00:02:16.649 04:18:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:16.649 04:18:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:16.649 04:18:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:16.649 04:18:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:16.649 04:18:16 -- spdk/autobuild.sh@16 -- $ date -u
00:02:16.649 Fri Dec 13 04:18:16 AM UTC 2024
00:02:16.649 04:18:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:16.649 v25.01-rc1-2-ge01cb43b8
00:02:16.649 04:18:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:16.649 04:18:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:16.649 04:18:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:16.649 04:18:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.649 04:18:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.649 ************************************ 00:02:16.649 START TEST asan 00:02:16.649 ************************************ 00:02:16.649 using asan 00:02:16.649 04:18:16 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:16.649 00:02:16.649 real 0m0.000s 00:02:16.649 user 0m0.000s 00:02:16.649 sys 0m0.000s 00:02:16.649 04:18:16 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:16.649 04:18:16 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.649 ************************************ 00:02:16.649 END TEST asan 00:02:16.649 ************************************ 00:02:16.909 04:18:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:16.909 04:18:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:16.909 04:18:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:16.909 04:18:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.909 04:18:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.909 ************************************ 00:02:16.909 START TEST ubsan 00:02:16.909 ************************************ 00:02:16.909 using ubsan 00:02:16.909 04:18:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:16.909 00:02:16.909 real 0m0.000s 00:02:16.909 user 0m0.000s 00:02:16.909 sys 0m0.000s 00:02:16.909 04:18:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:16.909 04:18:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.909 ************************************ 00:02:16.909 END TEST ubsan 00:02:16.909 ************************************ 00:02:16.909 04:18:16 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:16.909 04:18:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:16.909 04:18:16 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:16.909 
04:18:16 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:16.909 04:18:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.909 04:18:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.909 ************************************ 00:02:16.909 START TEST build_native_dpdk 00:02:16.909 ************************************ 00:02:16.909 04:18:16 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:16.909 caf0f5d395 version: 22.11.4 00:02:16.909 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:16.909 dc9c799c7d vhost: fix missing spinlock unlock 00:02:16.909 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:16.909 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" 
"power/kvm_vm") 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.909 04:18:16 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:16.909 patching file config/rte_config.h 00:02:16.909 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:16.909 04:18:16 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.909 04:18:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:16.910 patching file lib/pcapng/rte_pcapng.c 00:02:16.910 Hunk #1 succeeded at 110 (offset -18 lines). 
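The two `cmp_versions` traces above (`lt 22.11.4 21.11.0` returning false, `lt 22.11.4 24.07.0` returning true and gating the `rte_pcapng.c` patch) both follow the same pattern: split each dotted version on `.`, then compare numeric fields left to right. As a hedged, standalone sketch — a hypothetical reimplementation for illustration, not the actual `scripts/common.sh` helper, and assuming purely numeric version fields:

```shell
# Minimal dotted-version "less than" check, mirroring the field-by-field
# comparison visible in the xtrace above (IFS=.-: split, per-field compare).
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first differing field decides: not less-than
        (( a < b )) && return 0   # less-than
    done
    return 1                      # equal versions are not "less than"
}

# The decisions seen in the log:
version_lt 22.11.4 21.11.0 && echo lt || echo not-lt   # not-lt: 22 > 21 in field 0
version_lt 22.11.4 24.07.0 && echo lt || echo not-lt   # lt: 22 < 24 in field 0
```

The real helper also supports `>=` and other operators via an `op` argument and a `case` dispatch, as the `scripts/common.sh@344` trace lines show; the sketch keeps only the `<` path.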
00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.910 04:18:16 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:16.910 04:18:16 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:23.481 The Meson build system 00:02:23.481 Version: 1.5.0 00:02:23.481 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:23.481 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:23.481 Build type: native build 00:02:23.481 Program cat found: YES (/usr/bin/cat) 00:02:23.481 Project name: DPDK 00:02:23.481 Project version: 22.11.4 00:02:23.481 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.481 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:23.481 Host machine cpu family: x86_64 00:02:23.481 Host machine cpu: x86_64 00:02:23.481 Message: ## Building in Developer Mode ## 00:02:23.481 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.481 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:23.481 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.481 Program objdump found: YES (/usr/bin/objdump) 00:02:23.481 Program python3 found: YES (/usr/bin/python3) 00:02:23.481 Program cat found: YES (/usr/bin/cat) 00:02:23.481 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:23.481 Checking for size of "void *" : 8 00:02:23.481 Checking for size of "void *" : 8 (cached) 00:02:23.481 Library m found: YES 00:02:23.481 Library numa found: YES 00:02:23.481 Has header "numaif.h" : YES 00:02:23.481 Library fdt found: NO 00:02:23.481 Library execinfo found: NO 00:02:23.481 Has header "execinfo.h" : YES 00:02:23.481 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.481 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.481 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.481 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.481 Run-time dependency openssl found: YES 3.1.1 00:02:23.482 Run-time dependency libpcap found: YES 1.10.4 00:02:23.482 Has header "pcap.h" with dependency libpcap: YES 00:02:23.482 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.482 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.482 Compiler for C supports arguments -Wformat: YES 00:02:23.482 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.482 Compiler for C supports arguments -Wformat-security: NO 00:02:23.482 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.482 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.482 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.482 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.482 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.482 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.482 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.482 Compiler for C supports arguments -Wundef: YES 00:02:23.482 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.482 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.482 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.482 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.482 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:23.482 Compiler for C supports arguments -mavx512f: YES 00:02:23.482 Checking if "AVX512 checking" compiles: YES 00:02:23.482 Fetching value of define "__SSE4_2__" : 1 00:02:23.482 Fetching value of define "__AES__" : 1 00:02:23.482 Fetching value of define "__AVX__" : 1 00:02:23.482 Fetching value of define "__AVX2__" : 1 00:02:23.482 Fetching value of define "__AVX512BW__" : 1 00:02:23.482 Fetching value of define "__AVX512CD__" : 1 00:02:23.482 Fetching value of define "__AVX512DQ__" : 1 00:02:23.482 Fetching value of define "__AVX512F__" : 1 00:02:23.482 Fetching value of define "__AVX512VL__" : 1 00:02:23.482 Fetching value of define "__PCLMUL__" : 1 00:02:23.482 Fetching value of define "__RDRND__" : 1 00:02:23.482 Fetching value of define "__RDSEED__" : 1 00:02:23.482 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.482 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.482 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.482 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.482 Checking for function "getentropy" : YES 00:02:23.482 Message: lib/eal: Defining dependency "eal" 00:02:23.482 Message: lib/ring: Defining dependency "ring" 00:02:23.482 Message: lib/rcu: Defining dependency "rcu" 00:02:23.482 Message: lib/mempool: Defining dependency "mempool" 00:02:23.482 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.482 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.482 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:23.482 Compiler for C supports arguments -mpclmul: YES 00:02:23.482 Compiler for C supports arguments -maes: YES 
00:02:23.482 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.482 Compiler for C supports arguments -mavx512bw: YES 00:02:23.482 Compiler for C supports arguments -mavx512dq: YES 00:02:23.482 Compiler for C supports arguments -mavx512vl: YES 00:02:23.482 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.482 Compiler for C supports arguments -mavx2: YES 00:02:23.482 Compiler for C supports arguments -mavx: YES 00:02:23.482 Message: lib/net: Defining dependency "net" 00:02:23.482 Message: lib/meter: Defining dependency "meter" 00:02:23.482 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.482 Message: lib/pci: Defining dependency "pci" 00:02:23.482 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.482 Message: lib/metrics: Defining dependency "metrics" 00:02:23.482 Message: lib/hash: Defining dependency "hash" 00:02:23.482 Message: lib/timer: Defining dependency "timer" 00:02:23.482 Fetching value of define "__AVX2__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.482 Message: lib/acl: Defining dependency "acl" 00:02:23.482 Message: lib/bbdev: Defining dependency "bbdev" 00:02:23.482 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:23.482 Run-time dependency libelf found: YES 0.191 00:02:23.482 Message: lib/bpf: Defining dependency "bpf" 00:02:23.482 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:23.482 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.482 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.482 Message: lib/distributor: Defining dependency "distributor" 00:02:23.482 Message: lib/efd: Defining dependency "efd" 00:02:23.482 Message: lib/eventdev: Defining dependency "eventdev" 00:02:23.482 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:23.482 Message: lib/gro: Defining dependency "gro" 00:02:23.482 Message: lib/gso: Defining dependency "gso" 00:02:23.482 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:23.482 Message: lib/jobstats: Defining dependency "jobstats" 00:02:23.482 Message: lib/latencystats: Defining dependency "latencystats" 00:02:23.482 Message: lib/lpm: Defining dependency "lpm" 00:02:23.482 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:23.482 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:23.482 Message: lib/member: Defining dependency "member" 00:02:23.482 Message: lib/pcapng: Defining dependency "pcapng" 00:02:23.482 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.482 Message: lib/power: Defining dependency "power" 00:02:23.482 Message: lib/rawdev: Defining dependency "rawdev" 00:02:23.482 Message: lib/regexdev: Defining dependency "regexdev" 00:02:23.482 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.482 Message: lib/rib: Defining dependency "rib" 00:02:23.482 Message: lib/reorder: Defining dependency "reorder" 00:02:23.482 Message: lib/sched: Defining dependency "sched" 00:02:23.482 Message: lib/security: Defining dependency "security" 00:02:23.482 Message: lib/stack: Defining dependency "stack" 00:02:23.482 Has header "linux/userfaultfd.h" : YES 00:02:23.482 Message: lib/vhost: Defining dependency "vhost" 00:02:23.482 Message: lib/ipsec: Defining dependency "ipsec" 00:02:23.482 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.482 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.482 Message: lib/fib: Defining dependency "fib" 00:02:23.482 Message: lib/port: Defining dependency "port" 00:02:23.482 Message: lib/pdump: Defining dependency "pdump" 
00:02:23.482 Message: lib/table: Defining dependency "table" 00:02:23.482 Message: lib/pipeline: Defining dependency "pipeline" 00:02:23.482 Message: lib/graph: Defining dependency "graph" 00:02:23.482 Message: lib/node: Defining dependency "node" 00:02:23.482 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.482 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.482 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.482 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.482 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:23.482 Compiler for C supports arguments -Wno-unused-value: YES 00:02:23.482 Compiler for C supports arguments -Wno-format: YES 00:02:23.482 Compiler for C supports arguments -Wno-format-security: YES 00:02:23.482 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:23.482 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:23.742 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:23.742 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:23.742 Fetching value of define "__AVX2__" : 1 (cached) 00:02:23.742 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.742 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.742 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.742 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:23.742 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:23.742 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:23.742 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.742 Configuring doxy-api.conf using configuration 00:02:23.742 Program sphinx-build found: NO 00:02:23.742 Configuring rte_build_config.h using configuration 00:02:23.742 Message: 00:02:23.742 ================= 00:02:23.742 Applications Enabled 00:02:23.742 ================= 00:02:23.742 00:02:23.742 apps: 
00:02:23.742 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:23.742 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:23.742 test-security-perf, 00:02:23.742 00:02:23.742 Message: 00:02:23.742 ================= 00:02:23.742 Libraries Enabled 00:02:23.742 ================= 00:02:23.742 00:02:23.742 libs: 00:02:23.742 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:23.742 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:23.742 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:23.742 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:23.742 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:23.742 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:23.742 table, pipeline, graph, node, 00:02:23.742 00:02:23.742 Message: 00:02:23.742 =============== 00:02:23.742 Drivers Enabled 00:02:23.742 =============== 00:02:23.742 00:02:23.742 common: 00:02:23.742 00:02:23.742 bus: 00:02:23.742 pci, vdev, 00:02:23.742 mempool: 00:02:23.742 ring, 00:02:23.742 dma: 00:02:23.742 00:02:23.742 net: 00:02:23.742 i40e, 00:02:23.742 raw: 00:02:23.742 00:02:23.742 crypto: 00:02:23.742 00:02:23.742 compress: 00:02:23.742 00:02:23.742 regex: 00:02:23.742 00:02:23.742 vdpa: 00:02:23.742 00:02:23.742 event: 00:02:23.742 00:02:23.742 baseband: 00:02:23.742 00:02:23.742 gpu: 00:02:23.742 00:02:23.742 00:02:23.742 Message: 00:02:23.742 ================= 00:02:23.742 Content Skipped 00:02:23.742 ================= 00:02:23.742 00:02:23.742 apps: 00:02:23.742 00:02:23.742 libs: 00:02:23.742 kni: explicitly disabled via build config (deprecated lib) 00:02:23.742 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:23.742 00:02:23.742 drivers: 00:02:23.742 common/cpt: not in enabled drivers build config 00:02:23.742 common/dpaax: not in enabled drivers build 
config
00:02:23.742 common/iavf: not in enabled drivers build config
00:02:23.742 common/idpf: not in enabled drivers build config
00:02:23.742 common/mvep: not in enabled drivers build config
00:02:23.742 common/octeontx: not in enabled drivers build config
00:02:23.742 bus/auxiliary: not in enabled drivers build config
00:02:23.742 bus/dpaa: not in enabled drivers build config
00:02:23.742 bus/fslmc: not in enabled drivers build config
00:02:23.742 bus/ifpga: not in enabled drivers build config
00:02:23.742 bus/vmbus: not in enabled drivers build config
00:02:23.742 common/cnxk: not in enabled drivers build config
00:02:23.742 common/mlx5: not in enabled drivers build config
00:02:23.742 common/qat: not in enabled drivers build config
00:02:23.742 common/sfc_efx: not in enabled drivers build config
00:02:23.742 mempool/bucket: not in enabled drivers build config
00:02:23.742 mempool/cnxk: not in enabled drivers build config
00:02:23.742 mempool/dpaa: not in enabled drivers build config
00:02:23.743 mempool/dpaa2: not in enabled drivers build config
00:02:23.743 mempool/octeontx: not in enabled drivers build config
00:02:23.743 mempool/stack: not in enabled drivers build config
00:02:23.743 dma/cnxk: not in enabled drivers build config
00:02:23.743 dma/dpaa: not in enabled drivers build config
00:02:23.743 dma/dpaa2: not in enabled drivers build config
00:02:23.743 dma/hisilicon: not in enabled drivers build config
00:02:23.743 dma/idxd: not in enabled drivers build config
00:02:23.743 dma/ioat: not in enabled drivers build config
00:02:23.743 dma/skeleton: not in enabled drivers build config
00:02:23.743 net/af_packet: not in enabled drivers build config
00:02:23.743 net/af_xdp: not in enabled drivers build config
00:02:23.743 net/ark: not in enabled drivers build config
00:02:23.743 net/atlantic: not in enabled drivers build config
00:02:23.743 net/avp: not in enabled drivers build config
00:02:23.743 net/axgbe: not in enabled drivers build config
00:02:23.743 net/bnx2x: not in enabled drivers build config
00:02:23.743 net/bnxt: not in enabled drivers build config
00:02:23.743 net/bonding: not in enabled drivers build config
00:02:23.743 net/cnxk: not in enabled drivers build config
00:02:23.743 net/cxgbe: not in enabled drivers build config
00:02:23.743 net/dpaa: not in enabled drivers build config
00:02:23.743 net/dpaa2: not in enabled drivers build config
00:02:23.743 net/e1000: not in enabled drivers build config
00:02:23.743 net/ena: not in enabled drivers build config
00:02:23.743 net/enetc: not in enabled drivers build config
00:02:23.743 net/enetfec: not in enabled drivers build config
00:02:23.743 net/enic: not in enabled drivers build config
00:02:23.743 net/failsafe: not in enabled drivers build config
00:02:23.743 net/fm10k: not in enabled drivers build config
00:02:23.743 net/gve: not in enabled drivers build config
00:02:23.743 net/hinic: not in enabled drivers build config
00:02:23.743 net/hns3: not in enabled drivers build config
00:02:23.743 net/iavf: not in enabled drivers build config
00:02:23.743 net/ice: not in enabled drivers build config
00:02:23.743 net/idpf: not in enabled drivers build config
00:02:23.743 net/igc: not in enabled drivers build config
00:02:23.743 net/ionic: not in enabled drivers build config
00:02:23.743 net/ipn3ke: not in enabled drivers build config
00:02:23.743 net/ixgbe: not in enabled drivers build config
00:02:23.743 net/kni: not in enabled drivers build config
00:02:23.743 net/liquidio: not in enabled drivers build config
00:02:23.743 net/mana: not in enabled drivers build config
00:02:23.743 net/memif: not in enabled drivers build config
00:02:23.743 net/mlx4: not in enabled drivers build config
00:02:23.743 net/mlx5: not in enabled drivers build config
00:02:23.743 net/mvneta: not in enabled drivers build config
00:02:23.743 net/mvpp2: not in enabled drivers build config
00:02:23.743 net/netvsc: not in enabled drivers build config
00:02:23.743 net/nfb: not in enabled drivers build config
00:02:23.743 net/nfp: not in enabled drivers build config
00:02:23.743 net/ngbe: not in enabled drivers build config
00:02:23.743 net/null: not in enabled drivers build config
00:02:23.743 net/octeontx: not in enabled drivers build config
00:02:23.743 net/octeon_ep: not in enabled drivers build config
00:02:23.743 net/pcap: not in enabled drivers build config
00:02:23.743 net/pfe: not in enabled drivers build config
00:02:23.743 net/qede: not in enabled drivers build config
00:02:23.743 net/ring: not in enabled drivers build config
00:02:23.743 net/sfc: not in enabled drivers build config
00:02:23.743 net/softnic: not in enabled drivers build config
00:02:23.743 net/tap: not in enabled drivers build config
00:02:23.743 net/thunderx: not in enabled drivers build config
00:02:23.743 net/txgbe: not in enabled drivers build config
00:02:23.743 net/vdev_netvsc: not in enabled drivers build config
00:02:23.743 net/vhost: not in enabled drivers build config
00:02:23.743 net/virtio: not in enabled drivers build config
00:02:23.743 net/vmxnet3: not in enabled drivers build config
00:02:23.743 raw/cnxk_bphy: not in enabled drivers build config
00:02:23.743 raw/cnxk_gpio: not in enabled drivers build config
00:02:23.743 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:23.743 raw/ifpga: not in enabled drivers build config
00:02:23.743 raw/ntb: not in enabled drivers build config
00:02:23.743 raw/skeleton: not in enabled drivers build config
00:02:23.743 crypto/armv8: not in enabled drivers build config
00:02:23.743 crypto/bcmfs: not in enabled drivers build config
00:02:23.743 crypto/caam_jr: not in enabled drivers build config
00:02:23.743 crypto/ccp: not in enabled drivers build config
00:02:23.743 crypto/cnxk: not in enabled drivers build config
00:02:23.743 crypto/dpaa_sec: not in enabled drivers build config
00:02:23.743 crypto/dpaa2_sec: not in enabled drivers build config
00:02:23.743 crypto/ipsec_mb: not in enabled drivers build config
00:02:23.743 crypto/mlx5: not in enabled drivers build config
00:02:23.743 crypto/mvsam: not in enabled drivers build config
00:02:23.743 crypto/nitrox: not in enabled drivers build config
00:02:23.743 crypto/null: not in enabled drivers build config
00:02:23.743 crypto/octeontx: not in enabled drivers build config
00:02:23.743 crypto/openssl: not in enabled drivers build config
00:02:23.743 crypto/scheduler: not in enabled drivers build config
00:02:23.743 crypto/uadk: not in enabled drivers build config
00:02:23.743 crypto/virtio: not in enabled drivers build config
00:02:23.743 compress/isal: not in enabled drivers build config
00:02:23.743 compress/mlx5: not in enabled drivers build config
00:02:23.743 compress/octeontx: not in enabled drivers build config
00:02:23.743 compress/zlib: not in enabled drivers build config
00:02:23.743 regex/mlx5: not in enabled drivers build config
00:02:23.743 regex/cn9k: not in enabled drivers build config
00:02:23.743 vdpa/ifc: not in enabled drivers build config
00:02:23.743 vdpa/mlx5: not in enabled drivers build config
00:02:23.743 vdpa/sfc: not in enabled drivers build config
00:02:23.743 event/cnxk: not in enabled drivers build config
00:02:23.743 event/dlb2: not in enabled drivers build config
00:02:23.743 event/dpaa: not in enabled drivers build config
00:02:23.743 event/dpaa2: not in enabled drivers build config
00:02:23.743 event/dsw: not in enabled drivers build config
00:02:23.743 event/opdl: not in enabled drivers build config
00:02:23.743 event/skeleton: not in enabled drivers build config
00:02:23.743 event/sw: not in enabled drivers build config
00:02:23.743 event/octeontx: not in enabled drivers build config
00:02:23.743 baseband/acc: not in enabled drivers build config
00:02:23.743 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:23.743 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:23.743 baseband/la12xx: not in enabled drivers build config
00:02:23.743 baseband/null: not in enabled drivers build config
00:02:23.743 baseband/turbo_sw: not in enabled drivers build config
00:02:23.743 gpu/cuda: not in enabled drivers build config
00:02:23.743
00:02:23.743
00:02:23.743 Build targets in project: 311
00:02:23.743
00:02:23.743 DPDK 22.11.4
00:02:23.743
00:02:23.743 User defined options
00:02:23.743 libdir : lib
00:02:23.743 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:23.743 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:23.743 c_link_args :
00:02:23.743 enable_docs : false
00:02:23.743 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:23.743 enable_kmods : false
00:02:23.743 machine : native
00:02:23.743 tests : false
00:02:23.743
00:02:23.743 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:23.743 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:24.002 04:18:23 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:24.002 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:24.002 [1/740] Generating lib/rte_telemetry_mingw with a custom command
00:02:24.002 [2/740] Generating lib/rte_telemetry_def with a custom command
00:02:24.002 [3/740] Generating lib/rte_kvargs_def with a custom command
00:02:24.002 [4/740] Generating lib/rte_kvargs_mingw with a custom command
00:02:24.002 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:24.262 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:24.262 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:24.262 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.262 [9/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:24.262 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.262 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:24.262 [12/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:24.262 [13/740] Linking static target lib/librte_kvargs.a
00:02:24.262 [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.262 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:24.262 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:24.262 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:24.262 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:24.262 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:24.262 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:24.262 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:24.521 [22/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.521 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:24.521 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:24.521 [25/740] Linking target lib/librte_kvargs.so.23.0
00:02:24.521 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:24.521 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:24.521 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:24.521 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:24.521 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:24.521 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:24.521 [32/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:24.521 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:24.521 [34/740] Linking static target lib/librte_telemetry.a
00:02:24.521 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:24.780 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:24.780 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:24.780 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:24.780 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:24.780 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:24.780 [41/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:24.780 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:24.780 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:25.040 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:25.040 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:25.040 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:25.040 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:25.040 [48/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.040 [49/740] Linking target lib/librte_telemetry.so.23.0
00:02:25.040 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:25.040 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:25.040 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:25.040 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:25.040 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:25.040 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:25.040 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:25.040 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:25.040 [58/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:25.040 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:25.040 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:25.040 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:25.040 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:25.040 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:25.040 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:25.040 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:25.040 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:25.348 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:25.348 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:25.348 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:25.348 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:25.348 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:25.348 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:25.348 [73/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:25.348 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:25.348 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:25.348 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:25.348 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:25.348 [78/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:25.348 [79/740] Generating lib/rte_eal_def with a custom command
00:02:25.348 [80/740] Generating lib/rte_eal_mingw with a custom command
00:02:25.348 [81/740] Generating lib/rte_ring_def with a custom command
00:02:25.348 [82/740] Generating lib/rte_ring_mingw with a custom command
00:02:25.348 [83/740] Generating lib/rte_rcu_def with a custom command
00:02:25.348 [84/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:25.348 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:25.348 [86/740] Generating lib/rte_rcu_mingw with a custom command
00:02:25.348 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:25.348 [88/740] Linking static target lib/librte_ring.a
00:02:25.607 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:25.607 [90/740] Generating lib/rte_mempool_def with a custom command
00:02:25.607 [91/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:25.607 [92/740] Generating lib/rte_mempool_mingw with a custom command
00:02:25.607 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:25.607 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.866 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:25.866 [96/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:25.866 [97/740] Generating lib/rte_mbuf_def with a custom command
00:02:25.867 [98/740] Linking static target lib/librte_eal.a
00:02:25.867 [99/740] Generating lib/rte_mbuf_mingw with a custom command
00:02:25.867 [100/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:25.867 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:25.867 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:25.867 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:26.126 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:26.126 [105/740] Linking static target lib/librte_rcu.a
00:02:26.126 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:26.126 [107/740] Linking static target lib/librte_mempool.a
00:02:26.126 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:26.126 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:26.126 [110/740] Generating lib/rte_net_def with a custom command
00:02:26.126 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:26.126 [112/740] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:26.126 [113/740] Generating lib/rte_net_mingw with a custom command
00:02:26.385 [114/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:26.385 [115/740] Generating lib/rte_meter_def with a custom command
00:02:26.385 [116/740] Generating lib/rte_meter_mingw with a custom command
00:02:26.385 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.385 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:26.385 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:26.385 [120/740] Linking static target lib/librte_meter.a
00:02:26.385 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:26.385 [122/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.645 [123/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:26.645 [124/740] Linking static target lib/librte_net.a
00:02:26.645 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:26.645 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:26.645 [127/740] Linking static target lib/librte_mbuf.a
00:02:26.645 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:26.645 [129/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.645 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:26.645 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:26.904 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:26.904 [133/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.904 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:27.163 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.163 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:27.163 [137/740] Generating lib/rte_ethdev_def with a custom command
00:02:27.163 [138/740] Generating lib/rte_ethdev_mingw with a custom command
00:02:27.163 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:27.163 [140/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:27.163 [141/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:27.163 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:27.163 [143/740] Generating lib/rte_pci_def with a custom command
00:02:27.163 [144/740] Linking static target lib/librte_pci.a
00:02:27.163 [145/740] Generating lib/rte_pci_mingw with a custom command
00:02:27.163 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:27.422 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:27.422 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:27.422 [149/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:27.422 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.422 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:27.422 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:27.422 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:27.422 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:27.422 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:27.422 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:27.681 [157/740] Generating lib/rte_cmdline_def with a custom command
00:02:27.681 [158/740] Generating lib/rte_cmdline_mingw with a custom command
00:02:27.681 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:27.681 [160/740] Generating lib/rte_metrics_def with a custom command
00:02:27.681 [161/740] Generating lib/rte_metrics_mingw with a custom command
00:02:27.681 [162/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:27.681 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:27.681 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:27.681 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:27.681 [166/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:27.681 [167/740] Generating lib/rte_hash_def with a custom command
00:02:27.681 [168/740] Linking static target lib/librte_cmdline.a
00:02:27.681 [169/740] Generating lib/rte_hash_mingw with a custom command
00:02:27.681 [170/740] Generating lib/rte_timer_def with a custom command
00:02:27.681 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:27.681 [172/740] Generating lib/rte_timer_mingw with a custom command
00:02:27.941 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:27.941 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:27.941 [175/740] Linking static target lib/librte_metrics.a
00:02:28.200 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:28.200 [177/740] Linking static target lib/librte_timer.a
00:02:28.200 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.200 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:28.459 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:28.459 [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:28.459 [182/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.459 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.459 [184/740] Generating lib/rte_acl_def with a custom command
00:02:28.459 [185/740] Generating lib/rte_acl_mingw with a custom command
00:02:28.459 [186/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:28.718 [187/740] Linking static target lib/librte_ethdev.a
00:02:28.718 [188/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:28.718 [189/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:28.718 [190/740] Generating lib/rte_bbdev_def with a custom command
00:02:28.718 [191/740] Generating lib/rte_bbdev_mingw with a custom command
00:02:28.718 [192/740] Generating lib/rte_bitratestats_def with a custom command
00:02:28.718 [193/740] Generating lib/rte_bitratestats_mingw with a custom command
00:02:28.977 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:28.977 [195/740] Linking static target lib/librte_bitratestats.a
00:02:28.977 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:29.237 [197/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:29.237 [198/740] Linking static target lib/librte_bbdev.a
00:02:29.237 [199/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:29.237 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.502 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:29.785 [202/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.785 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:29.785 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:29.785 [205/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:29.785 [206/740] Linking static target lib/librte_hash.a
00:02:30.075 [207/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:30.075 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:30.075 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:30.075 [210/740] Generating lib/rte_bpf_def with a custom command
00:02:30.342 [211/740] Generating lib/rte_bpf_mingw with a custom command
00:02:30.342 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:30.342 [213/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.342 [214/740] Generating lib/rte_cfgfile_def with a custom command
00:02:30.342 [215/740] Generating lib/rte_cfgfile_mingw with a custom command
00:02:30.342 [216/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:30.342 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:30.342 [218/740] Generating lib/rte_compressdev_def with a custom command
00:02:30.342 [219/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:30.342 [220/740] Linking static target lib/librte_cfgfile.a
00:02:30.342 [221/740] Generating lib/rte_compressdev_mingw with a custom command
00:02:30.602 [222/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:30.602 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:30.602 [224/740] Linking static target lib/librte_bpf.a
00:02:30.602 [225/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:30.602 [226/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:30.861 [227/740] Linking static target lib/librte_acl.a
00:02:30.861 [228/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.861 [229/740] Generating lib/rte_cryptodev_def with a custom command
00:02:30.861 [230/740] Generating lib/rte_cryptodev_mingw with a custom command
00:02:30.861 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:30.861 [232/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:30.861 [233/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.861 [234/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:30.861 [235/740] Generating lib/rte_distributor_def with a custom command
00:02:30.861 [236/740] Linking static target lib/librte_compressdev.a
00:02:30.861 [237/740] Generating lib/rte_distributor_mingw with a custom command
00:02:30.861 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.121 [239/740] Generating lib/rte_efd_def with a custom command
00:02:31.121 [240/740] Generating lib/rte_efd_mingw with a custom command
00:02:31.121 [241/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:31.121 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:31.380 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:31.380 [244/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.380 [245/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:31.380 [246/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:31.380 [247/740] Linking static target lib/librte_distributor.a
00:02:31.380 [248/740] Linking target lib/librte_eal.so.23.0
00:02:31.380 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:31.640 [250/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.640 [251/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:31.640 [252/740] Linking target lib/librte_ring.so.23.0
00:02:31.640 [253/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.640 [254/740] Linking target lib/librte_meter.so.23.0
00:02:31.640 [255/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:31.640 [256/740] Linking target lib/librte_rcu.so.23.0
00:02:31.640 [257/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:31.899 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:31.899 [259/740] Linking target lib/librte_mempool.so.23.0
00:02:31.899 [260/740] Linking target lib/librte_pci.so.23.0
00:02:31.899 [261/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:31.899 [262/740] Linking target lib/librte_timer.so.23.0
00:02:31.899 [263/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:31.899 [264/740] Linking target lib/librte_mbuf.so.23.0
00:02:31.899 [265/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:31.899 [266/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:31.899 [267/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:31.899 [268/740] Linking static target lib/librte_efd.a
00:02:31.899 [269/740] Linking target lib/librte_acl.so.23.0
00:02:31.899 [270/740] Linking target lib/librte_cfgfile.so.23.0
00:02:31.899 [271/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:32.159 [272/740] Linking target lib/librte_net.so.23.0
00:02:32.159 [273/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:32.159 [274/740] Linking target lib/librte_bbdev.so.23.0
00:02:32.159 [275/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:32.159 [276/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:32.159 [277/740] Linking target lib/librte_compressdev.so.23.0
00:02:32.159 [278/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.159 [279/740] Linking target lib/librte_distributor.so.23.0
00:02:32.159 [280/740] Generating lib/rte_eventdev_def with a custom command
00:02:32.159 [281/740] Linking target lib/librte_cmdline.so.23.0
00:02:32.159 [282/740] Linking target lib/librte_hash.so.23.0
00:02:32.159 [283/740] Generating lib/rte_eventdev_mingw with a custom command
00:02:32.159 [284/740] Generating lib/rte_gpudev_def with a custom command
00:02:32.159 [285/740] Generating lib/rte_gpudev_mingw with a custom command
00:02:32.159 [286/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:32.418 [287/740] Linking target lib/librte_efd.so.23.0
00:02:32.418 [288/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:32.418 [289/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:32.418 [290/740] Linking static target lib/librte_cryptodev.a
00:02:32.418 [291/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.418 [292/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:32.418 [293/740] Linking target lib/librte_ethdev.so.23.0
00:02:32.678 [294/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:32.678 [295/740] Linking target lib/librte_metrics.so.23.0
00:02:32.678 [296/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:32.678 [297/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:32.678 [298/740] Generating lib/rte_gro_def with a custom command
00:02:32.678 [299/740] Generating lib/rte_gro_mingw with a custom command
00:02:32.678 [300/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:32.678 [301/740] Linking target lib/librte_bpf.so.23.0
00:02:32.678 [302/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:32.678 [303/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:32.678 [304/740] Linking static target lib/librte_gpudev.a
00:02:32.678 [305/740] Linking target lib/librte_bitratestats.so.23.0
00:02:32.937 [306/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:32.937 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:32.937 [308/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:32.937 [309/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:32.937 [310/740] Linking static target lib/librte_gro.a
00:02:33.196 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:33.196 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:33.196 [313/740] Generating lib/rte_gso_def with a custom command
00:02:33.196 [314/740] Generating lib/rte_gso_mingw with a custom command
00:02:33.196 [315/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.196 [316/740] Linking target lib/librte_gro.so.23.0
00:02:33.196 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:33.196 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:33.196 [319/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:33.196 [320/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:33.456 [321/740] Linking static target lib/librte_eventdev.a
00:02:33.456 [322/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:33.456 [323/740] Linking static target lib/librte_gso.a
00:02:33.456 [324/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.456 [325/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.456 [326/740] Linking target lib/librte_gpudev.so.23.0
00:02:33.456 [327/740] Linking target lib/librte_gso.so.23.0
00:02:33.456 [328/740] Generating lib/rte_ip_frag_def with a custom command
00:02:33.456 [329/740] Generating lib/rte_ip_frag_mingw with a custom command
00:02:33.456 [330/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:33.456 [331/740] Generating lib/rte_jobstats_def with a custom command
00:02:33.715 [332/740] Generating lib/rte_jobstats_mingw with a custom command
00:02:33.715 [333/740] Generating lib/rte_latencystats_def with a custom command
00:02:33.715 [334/740] Generating lib/rte_latencystats_mingw with a custom command
00:02:33.715 [335/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:33.715 [336/740] Linking static target lib/librte_jobstats.a
00:02:33.715 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:33.715 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:33.715 [339/740] Generating lib/rte_lpm_def with a custom command
00:02:33.715 [340/740] Generating lib/rte_lpm_mingw with a custom command
00:02:33.715 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:33.715 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:33.715 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:33.715 [344/740] Linking static target lib/librte_ip_frag.a
00:02:33.974 [345/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.974 [346/740] Linking target lib/librte_jobstats.so.23.0
00:02:33.974 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:33.974 [348/740] Linking static target lib/librte_latencystats.a
00:02:33.974 [349/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.233 [350/740] Linking target lib/librte_ip_frag.so.23.0
00:02:34.233 [351/740] Generating lib/cryptodev.sym_chk with
a custom command (wrapped by meson to capture output) 00:02:34.233 [352/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:34.233 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:34.233 [354/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:34.233 [355/740] Linking target lib/librte_cryptodev.so.23.0 00:02:34.233 [356/740] Generating lib/rte_member_def with a custom command 00:02:34.233 [357/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:34.233 [358/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:34.233 [359/740] Generating lib/rte_member_mingw with a custom command 00:02:34.233 [360/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.233 [361/740] Generating lib/rte_pcapng_def with a custom command 00:02:34.233 [362/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:34.233 [363/740] Linking target lib/librte_latencystats.so.23.0 00:02:34.233 [364/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:34.233 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.233 [366/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.493 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.493 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:34.493 [369/740] Linking static target lib/librte_lpm.a 00:02:34.493 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.752 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:34.752 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:34.752 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:34.752 [374/740] Generating lib/rte_power_def with a custom 
command 00:02:34.752 [375/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:34.752 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:34.752 [377/740] Generating lib/rte_rawdev_def with a custom command 00:02:34.752 [378/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:34.752 [379/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:34.752 [380/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.752 [381/740] Generating lib/rte_regexdev_def with a custom command 00:02:34.752 [382/740] Linking target lib/librte_lpm.so.23.0 00:02:34.752 [383/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:34.752 [384/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:34.752 [385/740] Linking static target lib/librte_pcapng.a 00:02:35.012 [386/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:35.012 [387/740] Generating lib/rte_dmadev_def with a custom command 00:02:35.012 [388/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:35.012 [389/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:35.012 [390/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.012 [391/740] Linking target lib/librte_eventdev.so.23.0 00:02:35.012 [392/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:35.013 [393/740] Linking static target lib/librte_rawdev.a 00:02:35.013 [394/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:35.013 [395/740] Generating lib/rte_rib_def with a custom command 00:02:35.013 [396/740] Generating lib/rte_rib_mingw with a custom command 00:02:35.013 [397/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:35.013 [398/740] Generating lib/pcapng.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:35.013 [399/740] Generating lib/rte_reorder_def with a custom command 00:02:35.013 [400/740] Linking target lib/librte_pcapng.so.23.0 00:02:35.286 [401/740] Generating lib/rte_reorder_mingw with a custom command 00:02:35.286 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.286 [403/740] Linking static target lib/librte_dmadev.a 00:02:35.286 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.287 [405/740] Linking static target lib/librte_power.a 00:02:35.287 [406/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:35.287 [407/740] Linking static target lib/librte_regexdev.a 00:02:35.287 [408/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:35.287 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:35.552 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.552 [411/740] Linking target lib/librte_rawdev.so.23.0 00:02:35.552 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:35.552 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:35.552 [414/740] Generating lib/rte_sched_def with a custom command 00:02:35.552 [415/740] Generating lib/rte_sched_mingw with a custom command 00:02:35.552 [416/740] Generating lib/rte_security_def with a custom command 00:02:35.552 [417/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:35.552 [418/740] Generating lib/rte_security_mingw with a custom command 00:02:35.552 [419/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:35.552 [420/740] Linking static target lib/librte_member.a 00:02:35.552 [421/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.552 [422/740] Linking static target lib/librte_reorder.a 00:02:35.552 [423/740] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:35.552 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:35.552 [425/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.552 [426/740] Generating lib/rte_stack_def with a custom command 00:02:35.552 [427/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:35.552 [428/740] Linking target lib/librte_dmadev.so.23.0 00:02:35.552 [429/740] Linking static target lib/librte_rib.a 00:02:35.812 [430/740] Generating lib/rte_stack_mingw with a custom command 00:02:35.812 [431/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:35.812 [432/740] Linking static target lib/librte_stack.a 00:02:35.812 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:35.812 [434/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:35.812 [435/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.812 [436/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.812 [437/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.812 [438/740] Linking target lib/librte_stack.so.23.0 00:02:35.812 [439/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.812 [440/740] Linking target lib/librte_reorder.so.23.0 00:02:35.812 [441/740] Linking target lib/librte_regexdev.so.23.0 00:02:35.812 [442/740] Linking target lib/librte_member.so.23.0 00:02:35.812 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.072 [444/740] Linking target lib/librte_power.so.23.0 00:02:36.072 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.072 [446/740] Linking static target lib/librte_security.a 00:02:36.072 [447/740] 
Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.072 [448/740] Linking target lib/librte_rib.so.23.0 00:02:36.332 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:36.332 [450/740] Generating lib/rte_vhost_def with a custom command 00:02:36.332 [451/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.332 [452/740] Generating lib/rte_vhost_mingw with a custom command 00:02:36.332 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.332 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.332 [455/740] Linking target lib/librte_security.so.23.0 00:02:36.332 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.591 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:36.591 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:36.591 [459/740] Linking static target lib/librte_sched.a 00:02:36.851 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:36.851 [461/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.851 [462/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:36.851 [463/740] Linking target lib/librte_sched.so.23.0 00:02:36.851 [464/740] Generating lib/rte_ipsec_def with a custom command 00:02:36.851 [465/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:36.851 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.851 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:37.110 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:37.110 [469/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:37.110 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:37.110 [471/740] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:37.110 [472/740] Generating lib/rte_fib_def with a custom command 00:02:37.110 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:37.397 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:37.397 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:37.665 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:37.665 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:37.665 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:37.665 [479/740] Linking static target lib/librte_ipsec.a 00:02:37.665 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:37.925 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:37.925 [482/740] Linking static target lib/librte_fib.a 00:02:37.925 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:37.925 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:37.925 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.925 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:37.925 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:38.184 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:38.184 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:38.184 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.184 [491/740] Linking target lib/librte_fib.so.23.0 00:02:38.753 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:38.753 [493/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:38.753 [494/740] Generating lib/rte_port_def with a custom command 00:02:38.753 [495/740] Generating lib/rte_port_mingw with a custom command 
00:02:38.753 [496/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:38.753 [497/740] Generating lib/rte_pdump_def with a custom command 00:02:38.753 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:38.753 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:02:38.753 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:38.753 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:38.753 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:38.753 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:39.012 [504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:39.012 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:39.271 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:39.271 [507/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:39.271 [508/740] Linking static target lib/librte_port.a 00:02:39.271 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:39.271 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:39.271 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:39.271 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:39.530 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:39.530 [514/740] Linking static target lib/librte_pdump.a 00:02:39.789 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.789 [516/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.789 [517/740] Linking target lib/librte_port.so.23.0 00:02:39.789 [518/740] Linking target lib/librte_pdump.so.23.0 00:02:39.789 [519/740] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:39.789 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:39.789 [521/740] Generating lib/rte_table_def with a custom command 00:02:39.789 [522/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:39.789 [523/740] Generating lib/rte_table_mingw with a custom command 00:02:40.048 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:40.048 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:40.048 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:40.048 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:40.048 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:40.048 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:40.308 [530/740] Generating lib/rte_pipeline_def with a custom command 00:02:40.308 [531/740] Linking static target lib/librte_table.a 00:02:40.308 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:40.308 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:40.567 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:40.567 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:40.567 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.567 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:40.567 [538/740] Linking target lib/librte_table.so.23.0 00:02:40.826 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:40.826 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:40.826 [541/740] Generating lib/rte_graph_def with a custom command 00:02:40.826 [542/740] Generating 
lib/rte_graph_mingw with a custom command 00:02:41.086 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:41.086 [544/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:41.086 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:41.086 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:41.086 [547/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.086 [548/740] Linking static target lib/librte_graph.a 00:02:41.346 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:41.346 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:41.605 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:41.605 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:41.605 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:41.605 [554/740] Generating lib/rte_node_def with a custom command 00:02:41.605 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:41.605 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.605 [557/740] Linking target lib/librte_graph.so.23.0 00:02:41.864 [558/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:41.864 [559/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:41.864 [560/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:41.864 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.864 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.864 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.864 [564/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:41.864 [565/740] Generating drivers/rte_bus_pci_def with a custom command 
00:02:41.864 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:42.123 [567/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:42.123 [568/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.123 [569/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:42.123 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:42.123 [571/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:42.123 [572/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:42.123 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:42.123 [574/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:42.123 [575/740] Linking static target lib/librte_node.a 00:02:42.123 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:42.123 [577/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:42.123 [578/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.124 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:42.124 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:42.381 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.381 [582/740] Linking target lib/librte_node.so.23.0 00:02:42.381 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.381 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.381 [585/740] Linking static target drivers/librte_bus_vdev.a 00:02:42.381 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.381 [587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.381 [588/740] Linking static target drivers/librte_bus_pci.a 00:02:42.640 [589/740] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.640 [590/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.640 [591/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.640 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:42.640 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:42.640 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:42.640 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.640 [596/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:42.899 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:42.899 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:42.899 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:42.899 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.899 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.899 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:43.158 [603/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.158 [604/740] Linking static target drivers/librte_mempool_ring.a 00:02:43.158 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.158 [606/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:43.158 [607/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:43.418 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:43.418 [609/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:43.676 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:43.676 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:43.935 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:44.194 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:44.194 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:44.454 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:44.454 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:44.713 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:44.713 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:44.713 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:44.713 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:44.713 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:44.972 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:45.234 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:45.803 [624/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:45.803 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:45.803 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:45.803 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:45.803 [628/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:45.803 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:45.803 [630/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:45.803 [631/740] Compiling C 
object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:45.803 [632/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:46.061 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:46.320 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:46.579 [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:46.579 [636/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:46.579 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:46.579 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:46.839 [639/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:46.839 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:46.839 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:46.839 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.839 [643/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.839 [644/740] Linking static target drivers/librte_net_i40e.a 00:02:46.839 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:46.839 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:47.098 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:47.098 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:47.098 [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.358 [650/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:47.358 [651/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:47.358 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:47.618 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:47.618 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:47.618 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:47.618 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:47.618 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:47.618 [658/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:47.877 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:47.877 [660/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.877 [661/740] Linking static target lib/librte_vhost.a 00:02:47.877 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:47.877 [663/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:47.877 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:48.137 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:48.397 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:48.397 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:48.397 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:48.966 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:48.966 [670/740] Generating lib/vhost.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:48.966 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:48.966 [672/740] Linking target lib/librte_vhost.so.23.0 00:02:48.966 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:49.226 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:49.226 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:49.226 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:49.226 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:49.486 [678/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:49.486 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:49.745 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:49.745 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:49.745 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:49.745 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:49.745 [684/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:49.745 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:50.011 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:50.011 [687/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:50.011 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:50.011 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:50.011 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:50.270 [691/740] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:50.530 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:50.530 [693/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:50.530 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:50.789 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:50.789 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:51.049 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:51.049 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:51.049 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:51.308 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:51.308 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:51.567 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:51.826 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:51.826 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:51.826 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:51.826 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:51.826 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:52.395 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:52.395 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:52.395 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:52.654 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:52.654 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:52.654 [713/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:52.913 [714/740] Compiling C object 
app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:52.913 [715/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:52.913 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:53.178 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:53.178 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:53.453 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:54.844 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:54.844 [721/740] Linking static target lib/librte_pipeline.a 00:02:55.102 [722/740] Linking target app/dpdk-proc-info 00:02:55.102 [723/740] Linking target app/dpdk-test-bbdev 00:02:55.102 [724/740] Linking target app/dpdk-pdump 00:02:55.102 [725/740] Linking target app/dpdk-dumpcap 00:02:55.102 [726/740] Linking target app/dpdk-test-eventdev 00:02:55.102 [727/740] Linking target app/dpdk-test-crypto-perf 00:02:55.102 [728/740] Linking target app/dpdk-test-acl 00:02:55.102 [729/740] Linking target app/dpdk-test-cmdline 00:02:55.361 [730/740] Linking target app/dpdk-test-compress-perf 00:02:55.619 [731/740] Linking target app/dpdk-test-fib 00:02:55.620 [732/740] Linking target app/dpdk-test-gpudev 00:02:55.620 [733/740] Linking target app/dpdk-test-flow-perf 00:02:55.620 [734/740] Linking target app/dpdk-test-pipeline 00:02:55.620 [735/740] Linking target app/dpdk-test-sad 00:02:55.620 [736/740] Linking target app/dpdk-test-security-perf 00:02:55.620 [737/740] Linking target app/dpdk-testpmd 00:02:55.620 [738/740] Linking target app/dpdk-test-regex 00:02:59.814 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.073 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:00.073 04:18:59 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:00.073 04:18:59 build_native_dpdk -- 
common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.073 04:18:59 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:00.073 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.073 [0/1] Installing files. 00:03:00.334 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.338 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.339 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.339 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.339 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.339 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:00.599 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:00.599 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:00.599 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:00.599 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:00.599 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.599 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.862 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:00.863 Installing
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:00.864 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:00.864 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:00.864 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:00.864 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:00.864 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:00.864 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:00.864 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:00.864 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:00.864 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:00.864 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:00.864 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:00.864 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:00.864 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:00.864 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:00.864 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:00.864 Installing symlink pointing to librte_net.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:00.864 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:00.864 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:00.864 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:00.864 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:00.864 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:00.864 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:00.864 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:00.864 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:00.864 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:00.864 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:00.864 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:00.864 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:00.864 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:00.864 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:00.864 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:00.864 Installing symlink pointing to librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:00.864 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:00.864 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:00.864 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:00.864 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:00.864 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:00.864 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:00.864 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:00.864 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:00.864 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:00.864 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:00.864 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:00.864 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:00.864 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:00.864 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:00.864 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 
00:03:00.864 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:00.864 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:00.864 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:00.864 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:00.864 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:00.864 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:00.864 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:00.865 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:00.865 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:00.865 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:00.865 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:00.865 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:00.865 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:00.865 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:00.865 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:00.865 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:00.865 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:00.865 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:00.865 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:00.865 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:00.865 Installing 
symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:00.865 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:00.865 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:00.865 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:00.865 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:00.865 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:00.865 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:00.865 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:00.865 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:00.865 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:00.865 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:00.865 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:00.865 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:00.865 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:00.865 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:00.865 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 
00:03:00.865 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:00.865 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:00.865 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:00.865 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:00.865 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:00.865 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:00.865 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:00.865 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:00.865 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:00.865 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:00.865 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:00.865 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:00.865 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:00.865 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:00.865 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:00.865 Installing symlink pointing to librte_stack.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:00.865 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:00.865 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:00.865 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:00.865 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:00.865 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:00.865 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:00.865 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:00.865 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:00.865 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:00.865 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:00.865 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:00.865 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:00.865 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:00.865 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:00.865 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:00.865 Installing symlink pointing to librte_graph.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:00.865 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:00.865 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:00.865 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:00.865 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:00.865 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:00.865 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:00.865 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:00.865 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:00.865 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:00.865 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:00.865 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:00.865 04:19:00 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:00.865 04:19:00 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.865 00:03:00.865 real 0m44.062s 00:03:00.865 user 4m10.967s 00:03:00.865 sys 0m51.335s 00:03:00.865 04:19:00 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 
00:03:00.865 04:19:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:00.865 ************************************ 00:03:00.865 END TEST build_native_dpdk 00:03:00.865 ************************************ 00:03:01.124 04:19:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:01.124 04:19:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:01.124 04:19:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:01.124 04:19:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:01.124 04:19:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:01.124 04:19:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:01.124 04:19:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:01.124 04:19:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:01.124 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:01.384 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.384 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:01.384 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:01.952 Using 'verbs' RDMA provider 00:03:17.799 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:35.899 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:35.899 Creating mk/config.mk...done. 00:03:35.899 Creating mk/cc.flags.mk...done. 00:03:35.899 Type 'make' to build. 
00:03:35.899 04:19:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:35.899 04:19:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:35.899 04:19:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:35.899 04:19:34 -- common/autotest_common.sh@10 -- $ set +x 00:03:35.899 ************************************ 00:03:35.899 START TEST make 00:03:35.899 ************************************ 00:03:35.899 04:19:34 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:22.633 CC lib/ut_mock/mock.o 00:04:22.633 CC lib/ut/ut.o 00:04:22.633 CC lib/log/log.o 00:04:22.633 CC lib/log/log_flags.o 00:04:22.633 CC lib/log/log_deprecated.o 00:04:22.633 LIB libspdk_ut_mock.a 00:04:22.633 LIB libspdk_ut.a 00:04:22.633 LIB libspdk_log.a 00:04:22.633 SO libspdk_ut_mock.so.6.0 00:04:22.633 SO libspdk_ut.so.2.0 00:04:22.633 SO libspdk_log.so.7.1 00:04:22.633 SYMLINK libspdk_ut_mock.so 00:04:22.633 SYMLINK libspdk_ut.so 00:04:22.633 SYMLINK libspdk_log.so 00:04:22.633 CC lib/util/cpuset.o 00:04:22.633 CC lib/util/bit_array.o 00:04:22.633 CC lib/util/crc16.o 00:04:22.633 CC lib/util/base64.o 00:04:22.633 CC lib/util/crc32c.o 00:04:22.633 CC lib/util/crc32.o 00:04:22.633 CC lib/ioat/ioat.o 00:04:22.633 CXX lib/trace_parser/trace.o 00:04:22.633 CC lib/dma/dma.o 00:04:22.633 CC lib/vfio_user/host/vfio_user_pci.o 00:04:22.633 CC lib/util/crc32_ieee.o 00:04:22.633 CC lib/util/crc64.o 00:04:22.633 CC lib/vfio_user/host/vfio_user.o 00:04:22.633 CC lib/util/dif.o 00:04:22.633 CC lib/util/fd.o 00:04:22.633 LIB libspdk_dma.a 00:04:22.633 CC lib/util/fd_group.o 00:04:22.633 SO libspdk_dma.so.5.0 00:04:22.633 CC lib/util/file.o 00:04:22.633 CC lib/util/hexlify.o 00:04:22.633 LIB libspdk_ioat.a 00:04:22.633 SO libspdk_ioat.so.7.0 00:04:22.633 SYMLINK libspdk_dma.so 00:04:22.633 CC lib/util/iov.o 00:04:22.633 CC lib/util/math.o 00:04:22.633 CC lib/util/net.o 00:04:22.633 LIB libspdk_vfio_user.a 00:04:22.633 SYMLINK libspdk_ioat.so 00:04:22.633 CC lib/util/pipe.o 
00:04:22.633 SO libspdk_vfio_user.so.5.0 00:04:22.633 CC lib/util/strerror_tls.o 00:04:22.633 CC lib/util/string.o 00:04:22.633 SYMLINK libspdk_vfio_user.so 00:04:22.633 CC lib/util/uuid.o 00:04:22.633 CC lib/util/xor.o 00:04:22.633 CC lib/util/zipf.o 00:04:22.633 CC lib/util/md5.o 00:04:22.633 LIB libspdk_util.a 00:04:22.633 LIB libspdk_trace_parser.a 00:04:22.633 SO libspdk_util.so.10.1 00:04:22.633 SO libspdk_trace_parser.so.6.0 00:04:22.633 SYMLINK libspdk_util.so 00:04:22.633 SYMLINK libspdk_trace_parser.so 00:04:22.633 CC lib/idxd/idxd.o 00:04:22.633 CC lib/idxd/idxd_user.o 00:04:22.633 CC lib/idxd/idxd_kernel.o 00:04:22.633 CC lib/json/json_parse.o 00:04:22.633 CC lib/vmd/vmd.o 00:04:22.633 CC lib/vmd/led.o 00:04:22.633 CC lib/env_dpdk/env.o 00:04:22.633 CC lib/conf/conf.o 00:04:22.633 CC lib/json/json_util.o 00:04:22.633 CC lib/rdma_utils/rdma_utils.o 00:04:22.633 CC lib/env_dpdk/memory.o 00:04:22.633 CC lib/env_dpdk/pci.o 00:04:22.633 LIB libspdk_conf.a 00:04:22.633 CC lib/json/json_write.o 00:04:22.633 SO libspdk_conf.so.6.0 00:04:22.633 CC lib/env_dpdk/init.o 00:04:22.633 CC lib/env_dpdk/threads.o 00:04:22.633 SYMLINK libspdk_conf.so 00:04:22.633 LIB libspdk_rdma_utils.a 00:04:22.633 CC lib/env_dpdk/pci_ioat.o 00:04:22.633 SO libspdk_rdma_utils.so.1.0 00:04:22.633 SYMLINK libspdk_rdma_utils.so 00:04:22.633 CC lib/env_dpdk/pci_virtio.o 00:04:22.633 CC lib/env_dpdk/pci_vmd.o 00:04:22.633 CC lib/env_dpdk/pci_idxd.o 00:04:22.633 CC lib/env_dpdk/pci_event.o 00:04:22.633 CC lib/env_dpdk/sigbus_handler.o 00:04:22.633 LIB libspdk_json.a 00:04:22.633 CC lib/env_dpdk/pci_dpdk.o 00:04:22.633 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:22.633 SO libspdk_json.so.6.0 00:04:22.633 SYMLINK libspdk_json.so 00:04:22.633 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:22.633 LIB libspdk_idxd.a 00:04:22.633 CC lib/rdma_provider/common.o 00:04:22.634 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:22.634 LIB libspdk_vmd.a 00:04:22.634 SO libspdk_idxd.so.12.1 00:04:22.634 SO 
libspdk_vmd.so.6.0 00:04:22.634 SYMLINK libspdk_idxd.so 00:04:22.634 SYMLINK libspdk_vmd.so 00:04:22.634 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:22.634 CC lib/jsonrpc/jsonrpc_server.o 00:04:22.634 CC lib/jsonrpc/jsonrpc_client.o 00:04:22.634 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:22.634 LIB libspdk_rdma_provider.a 00:04:22.634 SO libspdk_rdma_provider.so.7.0 00:04:22.634 SYMLINK libspdk_rdma_provider.so 00:04:22.634 LIB libspdk_jsonrpc.a 00:04:22.634 SO libspdk_jsonrpc.so.6.0 00:04:22.634 SYMLINK libspdk_jsonrpc.so 00:04:22.634 LIB libspdk_env_dpdk.a 00:04:22.634 SO libspdk_env_dpdk.so.15.1 00:04:22.634 CC lib/rpc/rpc.o 00:04:22.634 SYMLINK libspdk_env_dpdk.so 00:04:22.634 LIB libspdk_rpc.a 00:04:22.634 SO libspdk_rpc.so.6.0 00:04:22.634 SYMLINK libspdk_rpc.so 00:04:22.634 CC lib/notify/notify.o 00:04:22.634 CC lib/notify/notify_rpc.o 00:04:22.634 CC lib/trace/trace.o 00:04:22.634 CC lib/trace/trace_rpc.o 00:04:22.634 CC lib/trace/trace_flags.o 00:04:22.634 CC lib/keyring/keyring_rpc.o 00:04:22.634 CC lib/keyring/keyring.o 00:04:22.634 LIB libspdk_notify.a 00:04:22.634 SO libspdk_notify.so.6.0 00:04:22.634 LIB libspdk_keyring.a 00:04:22.634 SYMLINK libspdk_notify.so 00:04:22.634 LIB libspdk_trace.a 00:04:22.634 SO libspdk_keyring.so.2.0 00:04:22.634 SO libspdk_trace.so.11.0 00:04:22.634 SYMLINK libspdk_keyring.so 00:04:22.634 SYMLINK libspdk_trace.so 00:04:22.892 CC lib/thread/thread.o 00:04:22.892 CC lib/sock/sock.o 00:04:22.892 CC lib/thread/iobuf.o 00:04:22.892 CC lib/sock/sock_rpc.o 00:04:23.151 LIB libspdk_sock.a 00:04:23.410 SO libspdk_sock.so.10.0 00:04:23.410 SYMLINK libspdk_sock.so 00:04:23.976 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.976 CC lib/nvme/nvme_ctrlr.o 00:04:23.976 CC lib/nvme/nvme_fabric.o 00:04:23.976 CC lib/nvme/nvme_ns_cmd.o 00:04:23.976 CC lib/nvme/nvme_ns.o 00:04:23.976 CC lib/nvme/nvme_pcie_common.o 00:04:23.976 CC lib/nvme/nvme_pcie.o 00:04:23.976 CC lib/nvme/nvme.o 00:04:23.976 CC lib/nvme/nvme_qpair.o 00:04:24.542 CC 
lib/nvme/nvme_quirks.o 00:04:24.542 LIB libspdk_thread.a 00:04:24.542 CC lib/nvme/nvme_transport.o 00:04:24.542 SO libspdk_thread.so.11.0 00:04:24.542 CC lib/nvme/nvme_discovery.o 00:04:24.542 SYMLINK libspdk_thread.so 00:04:24.542 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:24.542 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:24.542 CC lib/nvme/nvme_tcp.o 00:04:24.800 CC lib/nvme/nvme_opal.o 00:04:24.800 CC lib/nvme/nvme_io_msg.o 00:04:24.800 CC lib/nvme/nvme_poll_group.o 00:04:24.800 CC lib/nvme/nvme_zns.o 00:04:25.059 CC lib/nvme/nvme_stubs.o 00:04:25.059 CC lib/nvme/nvme_auth.o 00:04:25.317 CC lib/accel/accel.o 00:04:25.317 CC lib/blob/blobstore.o 00:04:25.317 CC lib/init/json_config.o 00:04:25.574 CC lib/accel/accel_rpc.o 00:04:25.574 CC lib/virtio/virtio.o 00:04:25.574 CC lib/virtio/virtio_vhost_user.o 00:04:25.574 CC lib/init/subsystem.o 00:04:25.574 CC lib/virtio/virtio_vfio_user.o 00:04:25.574 CC lib/fsdev/fsdev.o 00:04:25.833 CC lib/init/subsystem_rpc.o 00:04:25.833 CC lib/fsdev/fsdev_io.o 00:04:25.833 CC lib/nvme/nvme_cuse.o 00:04:25.833 CC lib/virtio/virtio_pci.o 00:04:25.833 CC lib/init/rpc.o 00:04:26.091 CC lib/blob/request.o 00:04:26.091 LIB libspdk_init.a 00:04:26.091 CC lib/blob/zeroes.o 00:04:26.091 SO libspdk_init.so.6.0 00:04:26.091 CC lib/fsdev/fsdev_rpc.o 00:04:26.091 SYMLINK libspdk_init.so 00:04:26.091 CC lib/blob/blob_bs_dev.o 00:04:26.091 CC lib/accel/accel_sw.o 00:04:26.091 LIB libspdk_virtio.a 00:04:26.091 SO libspdk_virtio.so.7.0 00:04:26.091 CC lib/nvme/nvme_rdma.o 00:04:26.350 SYMLINK libspdk_virtio.so 00:04:26.350 LIB libspdk_fsdev.a 00:04:26.350 SO libspdk_fsdev.so.2.0 00:04:26.350 SYMLINK libspdk_fsdev.so 00:04:26.350 CC lib/event/reactor.o 00:04:26.350 CC lib/event/app.o 00:04:26.350 CC lib/event/log_rpc.o 00:04:26.350 CC lib/event/app_rpc.o 00:04:26.350 CC lib/event/scheduler_static.o 00:04:26.350 LIB libspdk_accel.a 00:04:26.608 SO libspdk_accel.so.16.0 00:04:26.608 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:26.608 SYMLINK 
libspdk_accel.so 00:04:26.867 CC lib/bdev/bdev.o 00:04:26.867 CC lib/bdev/bdev_rpc.o 00:04:26.867 CC lib/bdev/bdev_zone.o 00:04:26.867 CC lib/bdev/part.o 00:04:26.867 CC lib/bdev/scsi_nvme.o 00:04:26.867 LIB libspdk_event.a 00:04:26.867 SO libspdk_event.so.14.0 00:04:27.125 SYMLINK libspdk_event.so 00:04:27.125 LIB libspdk_fuse_dispatcher.a 00:04:27.125 SO libspdk_fuse_dispatcher.so.1.0 00:04:27.384 SYMLINK libspdk_fuse_dispatcher.so 00:04:27.643 LIB libspdk_nvme.a 00:04:27.901 SO libspdk_nvme.so.15.0 00:04:28.160 SYMLINK libspdk_nvme.so 00:04:28.729 LIB libspdk_blob.a 00:04:28.729 SO libspdk_blob.so.12.0 00:04:28.990 SYMLINK libspdk_blob.so 00:04:29.250 CC lib/lvol/lvol.o 00:04:29.250 CC lib/blobfs/blobfs.o 00:04:29.250 CC lib/blobfs/tree.o 00:04:29.820 LIB libspdk_bdev.a 00:04:29.820 SO libspdk_bdev.so.17.0 00:04:29.820 SYMLINK libspdk_bdev.so 00:04:30.079 LIB libspdk_blobfs.a 00:04:30.079 CC lib/ftl/ftl_init.o 00:04:30.079 CC lib/ftl/ftl_debug.o 00:04:30.079 CC lib/ftl/ftl_core.o 00:04:30.079 CC lib/ftl/ftl_layout.o 00:04:30.079 CC lib/nbd/nbd.o 00:04:30.079 SO libspdk_blobfs.so.11.0 00:04:30.339 CC lib/nvmf/ctrlr.o 00:04:30.339 CC lib/scsi/dev.o 00:04:30.339 CC lib/ublk/ublk.o 00:04:30.339 SYMLINK libspdk_blobfs.so 00:04:30.339 CC lib/ublk/ublk_rpc.o 00:04:30.339 LIB libspdk_lvol.a 00:04:30.339 SO libspdk_lvol.so.11.0 00:04:30.339 CC lib/scsi/lun.o 00:04:30.339 SYMLINK libspdk_lvol.so 00:04:30.339 CC lib/nbd/nbd_rpc.o 00:04:30.339 CC lib/nvmf/ctrlr_discovery.o 00:04:30.339 CC lib/scsi/port.o 00:04:30.339 CC lib/scsi/scsi.o 00:04:30.598 CC lib/nvmf/ctrlr_bdev.o 00:04:30.598 CC lib/ftl/ftl_io.o 00:04:30.598 CC lib/scsi/scsi_bdev.o 00:04:30.598 CC lib/scsi/scsi_pr.o 00:04:30.598 CC lib/nvmf/subsystem.o 00:04:30.598 LIB libspdk_nbd.a 00:04:30.598 SO libspdk_nbd.so.7.0 00:04:30.598 SYMLINK libspdk_nbd.so 00:04:30.598 CC lib/scsi/scsi_rpc.o 00:04:30.598 CC lib/scsi/task.o 00:04:30.858 CC lib/ftl/ftl_sb.o 00:04:30.858 CC lib/nvmf/nvmf.o 00:04:30.858 LIB libspdk_ublk.a 
00:04:30.858 SO libspdk_ublk.so.3.0 00:04:30.858 CC lib/ftl/ftl_l2p.o 00:04:30.858 CC lib/nvmf/nvmf_rpc.o 00:04:30.858 CC lib/ftl/ftl_l2p_flat.o 00:04:31.118 SYMLINK libspdk_ublk.so 00:04:31.118 CC lib/nvmf/transport.o 00:04:31.118 CC lib/ftl/ftl_nv_cache.o 00:04:31.118 LIB libspdk_scsi.a 00:04:31.118 SO libspdk_scsi.so.9.0 00:04:31.118 CC lib/ftl/ftl_band.o 00:04:31.118 CC lib/ftl/ftl_band_ops.o 00:04:31.118 SYMLINK libspdk_scsi.so 00:04:31.118 CC lib/nvmf/tcp.o 00:04:31.378 CC lib/iscsi/conn.o 00:04:31.638 CC lib/ftl/ftl_writer.o 00:04:31.638 CC lib/vhost/vhost.o 00:04:31.638 CC lib/nvmf/stubs.o 00:04:31.638 CC lib/ftl/ftl_rq.o 00:04:31.898 CC lib/nvmf/mdns_server.o 00:04:31.898 CC lib/ftl/ftl_reloc.o 00:04:31.898 CC lib/ftl/ftl_l2p_cache.o 00:04:31.898 CC lib/ftl/ftl_p2l.o 00:04:31.898 CC lib/iscsi/init_grp.o 00:04:32.158 CC lib/vhost/vhost_rpc.o 00:04:32.158 CC lib/vhost/vhost_scsi.o 00:04:32.158 CC lib/vhost/vhost_blk.o 00:04:32.158 CC lib/vhost/rte_vhost_user.o 00:04:32.158 CC lib/iscsi/iscsi.o 00:04:32.158 CC lib/iscsi/param.o 00:04:32.158 CC lib/ftl/ftl_p2l_log.o 00:04:32.418 CC lib/ftl/mngt/ftl_mngt.o 00:04:32.418 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:32.418 CC lib/nvmf/rdma.o 00:04:32.418 CC lib/nvmf/auth.o 00:04:32.677 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:32.677 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:32.677 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:32.677 CC lib/iscsi/portal_grp.o 00:04:32.937 CC lib/iscsi/tgt_node.o 00:04:32.937 CC lib/iscsi/iscsi_subsystem.o 00:04:32.937 CC lib/iscsi/iscsi_rpc.o 00:04:32.937 CC lib/iscsi/task.o 00:04:32.937 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.937 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:33.197 LIB libspdk_vhost.a 00:04:33.197 SO libspdk_vhost.so.8.0 00:04:33.197 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:33.197 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:33.197 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:33.197 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.197 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.197 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.197 SYMLINK libspdk_vhost.so 00:04:33.197 CC lib/ftl/utils/ftl_conf.o 00:04:33.457 CC lib/ftl/utils/ftl_md.o 00:04:33.457 CC lib/ftl/utils/ftl_mempool.o 00:04:33.457 CC lib/ftl/utils/ftl_bitmap.o 00:04:33.457 CC lib/ftl/utils/ftl_property.o 00:04:33.457 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.457 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:33.457 LIB libspdk_iscsi.a 00:04:33.457 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:33.457 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:33.716 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:33.716 SO libspdk_iscsi.so.8.0 00:04:33.716 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.717 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:33.717 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:33.717 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:33.717 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.717 SYMLINK libspdk_iscsi.so 00:04:33.717 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:33.717 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:33.717 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:33.976 CC lib/ftl/base/ftl_base_dev.o 00:04:33.976 CC lib/ftl/base/ftl_base_bdev.o 00:04:33.976 CC lib/ftl/ftl_trace.o 00:04:33.976 LIB libspdk_ftl.a 00:04:34.236 SO libspdk_ftl.so.9.0 00:04:34.496 SYMLINK libspdk_ftl.so 00:04:34.756 LIB libspdk_nvmf.a 00:04:34.756 SO libspdk_nvmf.so.20.0 00:04:35.016 SYMLINK libspdk_nvmf.so 00:04:35.584 CC module/env_dpdk/env_dpdk_rpc.o 00:04:35.584 CC module/keyring/linux/keyring.o 00:04:35.584 CC module/sock/posix/posix.o 00:04:35.584 CC module/accel/error/accel_error.o 00:04:35.584 CC module/keyring/file/keyring.o 00:04:35.584 CC module/accel/ioat/accel_ioat.o 00:04:35.584 CC module/blob/bdev/blob_bdev.o 00:04:35.584 CC module/fsdev/aio/fsdev_aio.o 00:04:35.584 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:35.584 CC module/accel/dsa/accel_dsa.o 00:04:35.584 LIB libspdk_env_dpdk_rpc.a 00:04:35.584 SO libspdk_env_dpdk_rpc.so.6.0 00:04:35.584 SYMLINK libspdk_env_dpdk_rpc.so 00:04:35.584 CC 
module/accel/dsa/accel_dsa_rpc.o 00:04:35.584 CC module/keyring/file/keyring_rpc.o 00:04:35.584 CC module/keyring/linux/keyring_rpc.o 00:04:35.854 CC module/accel/ioat/accel_ioat_rpc.o 00:04:35.854 CC module/accel/error/accel_error_rpc.o 00:04:35.854 LIB libspdk_keyring_file.a 00:04:35.854 LIB libspdk_scheduler_dynamic.a 00:04:35.854 LIB libspdk_keyring_linux.a 00:04:35.854 LIB libspdk_blob_bdev.a 00:04:35.854 LIB libspdk_accel_ioat.a 00:04:35.854 SO libspdk_scheduler_dynamic.so.4.0 00:04:35.854 SO libspdk_keyring_file.so.2.0 00:04:35.854 SO libspdk_keyring_linux.so.1.0 00:04:35.854 SO libspdk_blob_bdev.so.12.0 00:04:35.854 SO libspdk_accel_ioat.so.6.0 00:04:35.854 LIB libspdk_accel_dsa.a 00:04:35.854 SYMLINK libspdk_keyring_linux.so 00:04:35.854 LIB libspdk_accel_error.a 00:04:35.854 SYMLINK libspdk_scheduler_dynamic.so 00:04:35.854 SYMLINK libspdk_keyring_file.so 00:04:35.854 SYMLINK libspdk_blob_bdev.so 00:04:35.854 SYMLINK libspdk_accel_ioat.so 00:04:35.854 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:35.854 SO libspdk_accel_dsa.so.5.0 00:04:35.854 CC module/fsdev/aio/linux_aio_mgr.o 00:04:35.854 SO libspdk_accel_error.so.2.0 00:04:36.132 SYMLINK libspdk_accel_dsa.so 00:04:36.132 SYMLINK libspdk_accel_error.so 00:04:36.132 CC module/accel/iaa/accel_iaa.o 00:04:36.132 CC module/accel/iaa/accel_iaa_rpc.o 00:04:36.132 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:36.132 CC module/scheduler/gscheduler/gscheduler.o 00:04:36.132 CC module/bdev/delay/vbdev_delay.o 00:04:36.132 LIB libspdk_accel_iaa.a 00:04:36.132 LIB libspdk_scheduler_dpdk_governor.a 00:04:36.132 CC module/bdev/error/vbdev_error.o 00:04:36.132 LIB libspdk_fsdev_aio.a 00:04:36.132 SO libspdk_accel_iaa.so.3.0 00:04:36.132 LIB libspdk_scheduler_gscheduler.a 00:04:36.132 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:36.415 CC module/blobfs/bdev/blobfs_bdev.o 00:04:36.415 SO libspdk_scheduler_gscheduler.so.4.0 00:04:36.415 SO libspdk_fsdev_aio.so.1.0 00:04:36.415 CC module/bdev/gpt/gpt.o 
00:04:36.415 CC module/bdev/lvol/vbdev_lvol.o 00:04:36.415 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:36.415 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.415 SYMLINK libspdk_accel_iaa.so 00:04:36.415 LIB libspdk_sock_posix.a 00:04:36.415 CC module/bdev/error/vbdev_error_rpc.o 00:04:36.415 SYMLINK libspdk_scheduler_gscheduler.so 00:04:36.415 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:36.415 SYMLINK libspdk_fsdev_aio.so 00:04:36.415 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.415 SO libspdk_sock_posix.so.6.0 00:04:36.415 CC module/bdev/gpt/vbdev_gpt.o 00:04:36.415 SYMLINK libspdk_sock_posix.so 00:04:36.415 LIB libspdk_bdev_error.a 00:04:36.415 LIB libspdk_blobfs_bdev.a 00:04:36.415 SO libspdk_bdev_error.so.6.0 00:04:36.415 SO libspdk_blobfs_bdev.so.6.0 00:04:36.415 LIB libspdk_bdev_delay.a 00:04:36.702 SO libspdk_bdev_delay.so.6.0 00:04:36.702 SYMLINK libspdk_blobfs_bdev.so 00:04:36.702 SYMLINK libspdk_bdev_error.so 00:04:36.702 CC module/bdev/malloc/bdev_malloc.o 00:04:36.702 SYMLINK libspdk_bdev_delay.so 00:04:36.702 CC module/bdev/nvme/bdev_nvme.o 00:04:36.702 CC module/bdev/null/bdev_null.o 00:04:36.702 CC module/bdev/null/bdev_null_rpc.o 00:04:36.702 LIB libspdk_bdev_gpt.a 00:04:36.702 CC module/bdev/passthru/vbdev_passthru.o 00:04:36.702 SO libspdk_bdev_gpt.so.6.0 00:04:36.702 CC module/bdev/split/vbdev_split.o 00:04:36.702 CC module/bdev/raid/bdev_raid.o 00:04:36.702 SYMLINK libspdk_bdev_gpt.so 00:04:36.702 CC module/bdev/raid/bdev_raid_rpc.o 00:04:36.702 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:36.702 LIB libspdk_bdev_lvol.a 00:04:36.702 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.978 SO libspdk_bdev_lvol.so.6.0 00:04:36.978 SYMLINK libspdk_bdev_lvol.so 00:04:36.978 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.978 LIB libspdk_bdev_null.a 00:04:36.978 CC module/bdev/split/vbdev_split_rpc.o 00:04:36.978 SO libspdk_bdev_null.so.6.0 00:04:36.978 CC module/bdev/nvme/nvme_rpc.o 00:04:36.978 LIB libspdk_bdev_passthru.a 00:04:36.978 
CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.978 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:36.978 SO libspdk_bdev_passthru.so.6.0 00:04:36.978 SYMLINK libspdk_bdev_null.so 00:04:36.978 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:36.978 SYMLINK libspdk_bdev_passthru.so 00:04:36.978 CC module/bdev/nvme/vbdev_opal.o 00:04:37.238 LIB libspdk_bdev_split.a 00:04:37.238 LIB libspdk_bdev_malloc.a 00:04:37.238 SO libspdk_bdev_split.so.6.0 00:04:37.238 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:37.238 SO libspdk_bdev_malloc.so.6.0 00:04:37.238 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:37.238 SYMLINK libspdk_bdev_split.so 00:04:37.238 LIB libspdk_bdev_zone_block.a 00:04:37.238 SYMLINK libspdk_bdev_malloc.so 00:04:37.238 SO libspdk_bdev_zone_block.so.6.0 00:04:37.238 SYMLINK libspdk_bdev_zone_block.so 00:04:37.238 CC module/bdev/raid/bdev_raid_sb.o 00:04:37.238 CC module/bdev/raid/raid0.o 00:04:37.238 CC module/bdev/raid/raid1.o 00:04:37.498 CC module/bdev/aio/bdev_aio.o 00:04:37.498 CC module/bdev/ftl/bdev_ftl.o 00:04:37.498 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.498 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.498 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.498 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.498 CC module/bdev/aio/bdev_aio_rpc.o 00:04:37.498 CC module/bdev/raid/concat.o 00:04:37.757 CC module/bdev/raid/raid5f.o 00:04:37.757 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.757 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.757 LIB libspdk_bdev_aio.a 00:04:37.757 LIB libspdk_bdev_ftl.a 00:04:37.758 SO libspdk_bdev_aio.so.6.0 00:04:37.758 LIB libspdk_bdev_iscsi.a 00:04:37.758 SO libspdk_bdev_ftl.so.6.0 00:04:37.758 SO libspdk_bdev_iscsi.so.6.0 00:04:37.758 SYMLINK libspdk_bdev_aio.so 00:04:37.758 SYMLINK libspdk_bdev_ftl.so 00:04:37.758 SYMLINK libspdk_bdev_iscsi.so 00:04:38.017 LIB libspdk_bdev_virtio.a 00:04:38.017 SO libspdk_bdev_virtio.so.6.0 00:04:38.017 SYMLINK libspdk_bdev_virtio.so 00:04:38.277 LIB libspdk_bdev_raid.a 
00:04:38.277 SO libspdk_bdev_raid.so.6.0 00:04:38.277 SYMLINK libspdk_bdev_raid.so 00:04:39.217 LIB libspdk_bdev_nvme.a 00:04:39.477 SO libspdk_bdev_nvme.so.7.1 00:04:39.477 SYMLINK libspdk_bdev_nvme.so 00:04:40.048 CC module/event/subsystems/vmd/vmd.o 00:04:40.048 CC module/event/subsystems/sock/sock.o 00:04:40.048 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:40.307 CC module/event/subsystems/keyring/keyring.o 00:04:40.307 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:40.307 CC module/event/subsystems/iobuf/iobuf.o 00:04:40.307 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:40.307 CC module/event/subsystems/fsdev/fsdev.o 00:04:40.307 CC module/event/subsystems/scheduler/scheduler.o 00:04:40.307 LIB libspdk_event_vhost_blk.a 00:04:40.307 LIB libspdk_event_keyring.a 00:04:40.307 LIB libspdk_event_fsdev.a 00:04:40.307 LIB libspdk_event_sock.a 00:04:40.307 LIB libspdk_event_vmd.a 00:04:40.307 LIB libspdk_event_scheduler.a 00:04:40.307 LIB libspdk_event_iobuf.a 00:04:40.307 SO libspdk_event_vhost_blk.so.3.0 00:04:40.307 SO libspdk_event_keyring.so.1.0 00:04:40.307 SO libspdk_event_fsdev.so.1.0 00:04:40.307 SO libspdk_event_sock.so.5.0 00:04:40.307 SO libspdk_event_scheduler.so.4.0 00:04:40.307 SO libspdk_event_vmd.so.6.0 00:04:40.307 SO libspdk_event_iobuf.so.3.0 00:04:40.307 SYMLINK libspdk_event_vhost_blk.so 00:04:40.307 SYMLINK libspdk_event_keyring.so 00:04:40.307 SYMLINK libspdk_event_fsdev.so 00:04:40.307 SYMLINK libspdk_event_sock.so 00:04:40.307 SYMLINK libspdk_event_scheduler.so 00:04:40.307 SYMLINK libspdk_event_vmd.so 00:04:40.307 SYMLINK libspdk_event_iobuf.so 00:04:40.875 CC module/event/subsystems/accel/accel.o 00:04:40.875 LIB libspdk_event_accel.a 00:04:41.133 SO libspdk_event_accel.so.6.0 00:04:41.133 SYMLINK libspdk_event_accel.so 00:04:41.701 CC module/event/subsystems/bdev/bdev.o 00:04:41.701 LIB libspdk_event_bdev.a 00:04:41.701 SO libspdk_event_bdev.so.6.0 00:04:41.960 SYMLINK libspdk_event_bdev.so 00:04:42.219 CC 
module/event/subsystems/scsi/scsi.o 00:04:42.219 CC module/event/subsystems/ublk/ublk.o 00:04:42.219 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:42.219 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:42.219 CC module/event/subsystems/nbd/nbd.o 00:04:42.477 LIB libspdk_event_scsi.a 00:04:42.478 LIB libspdk_event_ublk.a 00:04:42.478 SO libspdk_event_scsi.so.6.0 00:04:42.478 SO libspdk_event_ublk.so.3.0 00:04:42.478 LIB libspdk_event_nbd.a 00:04:42.478 SYMLINK libspdk_event_ublk.so 00:04:42.478 SO libspdk_event_nbd.so.6.0 00:04:42.478 SYMLINK libspdk_event_scsi.so 00:04:42.478 LIB libspdk_event_nvmf.a 00:04:42.478 SYMLINK libspdk_event_nbd.so 00:04:42.478 SO libspdk_event_nvmf.so.6.0 00:04:42.736 SYMLINK libspdk_event_nvmf.so 00:04:42.997 CC module/event/subsystems/iscsi/iscsi.o 00:04:42.997 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:42.997 LIB libspdk_event_iscsi.a 00:04:43.257 LIB libspdk_event_vhost_scsi.a 00:04:43.257 SO libspdk_event_iscsi.so.6.0 00:04:43.257 SO libspdk_event_vhost_scsi.so.3.0 00:04:43.257 SYMLINK libspdk_event_iscsi.so 00:04:43.257 SYMLINK libspdk_event_vhost_scsi.so 00:04:43.516 SO libspdk.so.6.0 00:04:43.516 SYMLINK libspdk.so 00:04:43.776 CXX app/trace/trace.o 00:04:43.776 CC app/trace_record/trace_record.o 00:04:43.776 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:43.776 CC app/iscsi_tgt/iscsi_tgt.o 00:04:43.776 CC app/nvmf_tgt/nvmf_main.o 00:04:43.776 CC examples/util/zipf/zipf.o 00:04:44.034 CC test/thread/poller_perf/poller_perf.o 00:04:44.034 CC examples/ioat/perf/perf.o 00:04:44.034 CC test/dma/test_dma/test_dma.o 00:04:44.034 CC test/app/bdev_svc/bdev_svc.o 00:04:44.034 LINK nvmf_tgt 00:04:44.034 LINK interrupt_tgt 00:04:44.034 LINK iscsi_tgt 00:04:44.034 LINK poller_perf 00:04:44.034 LINK zipf 00:04:44.034 LINK spdk_trace_record 00:04:44.293 LINK ioat_perf 00:04:44.293 LINK spdk_trace 00:04:44.293 LINK bdev_svc 00:04:44.293 CC test/app/histogram_perf/histogram_perf.o 00:04:44.293 CC test/app/jsoncat/jsoncat.o 
00:04:44.293 CC test/app/stub/stub.o 00:04:44.293 TEST_HEADER include/spdk/accel.h 00:04:44.293 TEST_HEADER include/spdk/accel_module.h 00:04:44.293 TEST_HEADER include/spdk/assert.h 00:04:44.293 TEST_HEADER include/spdk/barrier.h 00:04:44.293 TEST_HEADER include/spdk/base64.h 00:04:44.293 TEST_HEADER include/spdk/bdev.h 00:04:44.293 TEST_HEADER include/spdk/bdev_module.h 00:04:44.293 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.293 TEST_HEADER include/spdk/bit_array.h 00:04:44.293 TEST_HEADER include/spdk/bit_pool.h 00:04:44.293 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.293 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.293 TEST_HEADER include/spdk/blobfs.h 00:04:44.293 TEST_HEADER include/spdk/blob.h 00:04:44.293 TEST_HEADER include/spdk/conf.h 00:04:44.293 TEST_HEADER include/spdk/config.h 00:04:44.293 TEST_HEADER include/spdk/cpuset.h 00:04:44.293 TEST_HEADER include/spdk/crc16.h 00:04:44.293 TEST_HEADER include/spdk/crc32.h 00:04:44.293 TEST_HEADER include/spdk/crc64.h 00:04:44.293 TEST_HEADER include/spdk/dif.h 00:04:44.293 TEST_HEADER include/spdk/dma.h 00:04:44.293 TEST_HEADER include/spdk/endian.h 00:04:44.293 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.293 TEST_HEADER include/spdk/env.h 00:04:44.293 TEST_HEADER include/spdk/event.h 00:04:44.293 TEST_HEADER include/spdk/fd_group.h 00:04:44.293 TEST_HEADER include/spdk/fd.h 00:04:44.293 TEST_HEADER include/spdk/file.h 00:04:44.293 TEST_HEADER include/spdk/fsdev.h 00:04:44.293 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:44.293 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.293 TEST_HEADER include/spdk/ftl.h 00:04:44.293 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.293 TEST_HEADER include/spdk/hexlify.h 00:04:44.552 TEST_HEADER include/spdk/histogram_data.h 00:04:44.552 CC examples/ioat/verify/verify.o 00:04:44.552 TEST_HEADER include/spdk/idxd.h 00:04:44.552 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.552 TEST_HEADER include/spdk/init.h 00:04:44.552 TEST_HEADER include/spdk/ioat.h 00:04:44.552 
TEST_HEADER include/spdk/ioat_spec.h 00:04:44.553 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.553 TEST_HEADER include/spdk/json.h 00:04:44.553 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.553 TEST_HEADER include/spdk/keyring.h 00:04:44.553 TEST_HEADER include/spdk/keyring_module.h 00:04:44.553 TEST_HEADER include/spdk/likely.h 00:04:44.553 TEST_HEADER include/spdk/log.h 00:04:44.553 TEST_HEADER include/spdk/lvol.h 00:04:44.553 TEST_HEADER include/spdk/md5.h 00:04:44.553 TEST_HEADER include/spdk/memory.h 00:04:44.553 TEST_HEADER include/spdk/mmio.h 00:04:44.553 TEST_HEADER include/spdk/nbd.h 00:04:44.553 TEST_HEADER include/spdk/net.h 00:04:44.553 TEST_HEADER include/spdk/notify.h 00:04:44.553 TEST_HEADER include/spdk/nvme.h 00:04:44.553 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.553 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.553 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.553 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.553 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.553 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.553 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.553 TEST_HEADER include/spdk/nvmf.h 00:04:44.553 LINK test_dma 00:04:44.553 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.553 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.553 TEST_HEADER include/spdk/opal.h 00:04:44.553 TEST_HEADER include/spdk/opal_spec.h 00:04:44.553 TEST_HEADER include/spdk/pci_ids.h 00:04:44.553 LINK jsoncat 00:04:44.553 TEST_HEADER include/spdk/pipe.h 00:04:44.553 LINK histogram_perf 00:04:44.553 TEST_HEADER include/spdk/queue.h 00:04:44.553 TEST_HEADER include/spdk/reduce.h 00:04:44.553 TEST_HEADER include/spdk/rpc.h 00:04:44.553 TEST_HEADER include/spdk/scheduler.h 00:04:44.553 TEST_HEADER include/spdk/scsi.h 00:04:44.553 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.553 TEST_HEADER include/spdk/sock.h 00:04:44.553 TEST_HEADER include/spdk/stdinc.h 00:04:44.553 TEST_HEADER include/spdk/string.h 00:04:44.553 TEST_HEADER include/spdk/thread.h 00:04:44.553 
TEST_HEADER include/spdk/trace.h 00:04:44.553 TEST_HEADER include/spdk/trace_parser.h 00:04:44.553 CC test/env/mem_callbacks/mem_callbacks.o 00:04:44.553 TEST_HEADER include/spdk/tree.h 00:04:44.553 TEST_HEADER include/spdk/ublk.h 00:04:44.553 TEST_HEADER include/spdk/util.h 00:04:44.553 TEST_HEADER include/spdk/uuid.h 00:04:44.553 TEST_HEADER include/spdk/version.h 00:04:44.553 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.553 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.553 TEST_HEADER include/spdk/vhost.h 00:04:44.553 TEST_HEADER include/spdk/vmd.h 00:04:44.553 LINK stub 00:04:44.553 TEST_HEADER include/spdk/xor.h 00:04:44.553 TEST_HEADER include/spdk/zipf.h 00:04:44.553 CXX test/cpp_headers/accel.o 00:04:44.553 CC app/spdk_tgt/spdk_tgt.o 00:04:44.553 CC examples/thread/thread/thread_ex.o 00:04:44.553 LINK verify 00:04:44.812 CXX test/cpp_headers/accel_module.o 00:04:44.812 LINK mem_callbacks 00:04:44.812 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:44.812 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:44.812 LINK spdk_tgt 00:04:44.812 CXX test/cpp_headers/assert.o 00:04:44.812 CC test/env/vtophys/vtophys.o 00:04:44.812 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.812 LINK nvme_fuzz 00:04:44.812 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:44.812 LINK thread 00:04:44.812 CC app/spdk_lspci/spdk_lspci.o 00:04:45.071 CXX test/cpp_headers/barrier.o 00:04:45.071 LINK vtophys 00:04:45.071 CXX test/cpp_headers/base64.o 00:04:45.071 LINK env_dpdk_post_init 00:04:45.071 CXX test/cpp_headers/bdev.o 00:04:45.071 CC examples/sock/hello_world/hello_sock.o 00:04:45.071 LINK spdk_lspci 00:04:45.071 CXX test/cpp_headers/bdev_module.o 00:04:45.071 CXX test/cpp_headers/bdev_zone.o 00:04:45.338 CC test/env/memory/memory_ut.o 00:04:45.338 LINK vhost_fuzz 00:04:45.338 CC app/spdk_nvme_perf/perf.o 00:04:45.338 CXX test/cpp_headers/bit_array.o 00:04:45.339 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.339 LINK hello_sock 00:04:45.339 CC test/env/pci/pci_ut.o 
00:04:45.339 CC examples/idxd/perf/perf.o 00:04:45.339 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:45.339 LINK lsvmd 00:04:45.339 CXX test/cpp_headers/bit_pool.o 00:04:45.339 CC app/spdk_nvme_identify/identify.o 00:04:45.600 CXX test/cpp_headers/blob_bdev.o 00:04:45.600 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.600 LINK hello_fsdev 00:04:45.600 CC examples/vmd/led/led.o 00:04:45.600 CC app/spdk_nvme_discover/discovery_aer.o 00:04:45.600 LINK idxd_perf 00:04:45.600 LINK pci_ut 00:04:45.860 LINK led 00:04:45.860 CXX test/cpp_headers/blobfs.o 00:04:45.860 LINK spdk_nvme_discover 00:04:45.860 CXX test/cpp_headers/blob.o 00:04:45.860 CXX test/cpp_headers/conf.o 00:04:45.860 LINK memory_ut 00:04:45.860 CC examples/accel/perf/accel_perf.o 00:04:46.119 CC test/event/event_perf/event_perf.o 00:04:46.119 CXX test/cpp_headers/config.o 00:04:46.119 CC test/event/reactor/reactor.o 00:04:46.119 CXX test/cpp_headers/cpuset.o 00:04:46.119 LINK event_perf 00:04:46.119 CXX test/cpp_headers/crc16.o 00:04:46.119 LINK spdk_nvme_perf 00:04:46.119 LINK reactor 00:04:46.377 CC test/event/reactor_perf/reactor_perf.o 00:04:46.377 CC test/nvme/aer/aer.o 00:04:46.377 LINK spdk_nvme_identify 00:04:46.377 CXX test/cpp_headers/crc32.o 00:04:46.377 CC examples/blob/hello_world/hello_blob.o 00:04:46.377 CC test/nvme/reset/reset.o 00:04:46.377 CC test/nvme/sgl/sgl.o 00:04:46.377 CC test/nvme/e2edp/nvme_dp.o 00:04:46.377 LINK reactor_perf 00:04:46.377 CXX test/cpp_headers/crc64.o 00:04:46.377 LINK iscsi_fuzz 00:04:46.377 LINK accel_perf 00:04:46.635 LINK hello_blob 00:04:46.635 CC app/spdk_top/spdk_top.o 00:04:46.635 LINK aer 00:04:46.635 CXX test/cpp_headers/dif.o 00:04:46.635 LINK reset 00:04:46.635 CC test/event/app_repeat/app_repeat.o 00:04:46.635 CXX test/cpp_headers/dma.o 00:04:46.635 LINK sgl 00:04:46.635 LINK nvme_dp 00:04:46.635 CXX test/cpp_headers/endian.o 00:04:46.893 CC test/nvme/overhead/overhead.o 00:04:46.893 CXX test/cpp_headers/env_dpdk.o 00:04:46.893 LINK app_repeat 
00:04:46.893 CC test/event/scheduler/scheduler.o 00:04:46.893 CC examples/blob/cli/blobcli.o 00:04:46.893 CXX test/cpp_headers/env.o 00:04:46.893 CC test/nvme/err_injection/err_injection.o 00:04:46.893 CC test/rpc_client/rpc_client_test.o 00:04:46.893 CC examples/nvme/hello_world/hello_world.o 00:04:47.151 CC examples/nvme/reconnect/reconnect.o 00:04:47.151 CC examples/bdev/hello_world/hello_bdev.o 00:04:47.151 LINK overhead 00:04:47.151 LINK scheduler 00:04:47.151 CXX test/cpp_headers/event.o 00:04:47.151 LINK err_injection 00:04:47.151 LINK rpc_client_test 00:04:47.151 CXX test/cpp_headers/fd_group.o 00:04:47.151 LINK hello_world 00:04:47.151 CXX test/cpp_headers/fd.o 00:04:47.151 LINK hello_bdev 00:04:47.409 CC test/nvme/startup/startup.o 00:04:47.409 LINK blobcli 00:04:47.409 CXX test/cpp_headers/file.o 00:04:47.409 LINK reconnect 00:04:47.409 LINK spdk_top 00:04:47.409 CC app/spdk_dd/spdk_dd.o 00:04:47.409 CC app/vhost/vhost.o 00:04:47.409 LINK startup 00:04:47.667 CXX test/cpp_headers/fsdev.o 00:04:47.667 CC app/fio/nvme/fio_plugin.o 00:04:47.667 CC examples/bdev/bdevperf/bdevperf.o 00:04:47.667 CC test/accel/dif/dif.o 00:04:47.667 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:47.667 LINK vhost 00:04:47.667 CC examples/nvme/arbitration/arbitration.o 00:04:47.667 CC examples/nvme/hotplug/hotplug.o 00:04:47.667 CXX test/cpp_headers/fsdev_module.o 00:04:47.667 CC test/nvme/reserve/reserve.o 00:04:47.925 LINK spdk_dd 00:04:47.925 CXX test/cpp_headers/ftl.o 00:04:47.925 LINK hotplug 00:04:47.925 CC app/fio/bdev/fio_plugin.o 00:04:47.925 LINK reserve 00:04:48.183 CXX test/cpp_headers/gpt_spec.o 00:04:48.183 LINK arbitration 00:04:48.183 CXX test/cpp_headers/hexlify.o 00:04:48.183 LINK spdk_nvme 00:04:48.183 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:48.183 LINK nvme_manage 00:04:48.183 CXX test/cpp_headers/histogram_data.o 00:04:48.183 CXX test/cpp_headers/idxd.o 00:04:48.183 CC examples/nvme/abort/abort.o 00:04:48.442 CC test/nvme/simple_copy/simple_copy.o 
00:04:48.442 LINK dif 00:04:48.442 LINK cmb_copy 00:04:48.442 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:48.442 CXX test/cpp_headers/idxd_spec.o 00:04:48.442 CXX test/cpp_headers/init.o 00:04:48.442 LINK bdevperf 00:04:48.442 LINK spdk_bdev 00:04:48.700 LINK pmr_persistence 00:04:48.700 CXX test/cpp_headers/ioat.o 00:04:48.700 LINK simple_copy 00:04:48.700 CC test/blobfs/mkfs/mkfs.o 00:04:48.700 CC test/nvme/connect_stress/connect_stress.o 00:04:48.700 CC test/nvme/boot_partition/boot_partition.o 00:04:48.700 LINK abort 00:04:48.700 CXX test/cpp_headers/ioat_spec.o 00:04:48.700 LINK mkfs 00:04:48.700 CC test/nvme/compliance/nvme_compliance.o 00:04:48.959 CC test/lvol/esnap/esnap.o 00:04:48.959 CC test/nvme/fused_ordering/fused_ordering.o 00:04:48.959 LINK boot_partition 00:04:48.959 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:48.959 LINK connect_stress 00:04:48.959 CC test/bdev/bdevio/bdevio.o 00:04:48.959 CXX test/cpp_headers/iscsi_spec.o 00:04:48.959 CXX test/cpp_headers/json.o 00:04:48.959 LINK doorbell_aers 00:04:48.959 LINK fused_ordering 00:04:48.959 CXX test/cpp_headers/jsonrpc.o 00:04:49.218 CC test/nvme/fdp/fdp.o 00:04:49.218 CC test/nvme/cuse/cuse.o 00:04:49.218 CC examples/nvmf/nvmf/nvmf.o 00:04:49.218 LINK nvme_compliance 00:04:49.218 CXX test/cpp_headers/keyring.o 00:04:49.218 CXX test/cpp_headers/keyring_module.o 00:04:49.218 CXX test/cpp_headers/likely.o 00:04:49.218 CXX test/cpp_headers/log.o 00:04:49.218 LINK bdevio 00:04:49.218 CXX test/cpp_headers/lvol.o 00:04:49.218 CXX test/cpp_headers/md5.o 00:04:49.476 CXX test/cpp_headers/memory.o 00:04:49.476 CXX test/cpp_headers/mmio.o 00:04:49.476 CXX test/cpp_headers/nbd.o 00:04:49.476 LINK nvmf 00:04:49.476 CXX test/cpp_headers/net.o 00:04:49.476 LINK fdp 00:04:49.476 CXX test/cpp_headers/notify.o 00:04:49.476 CXX test/cpp_headers/nvme.o 00:04:49.476 CXX test/cpp_headers/nvme_intel.o 00:04:49.476 CXX test/cpp_headers/nvme_ocssd.o 00:04:49.476 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:04:49.476 CXX test/cpp_headers/nvme_spec.o 00:04:49.476 CXX test/cpp_headers/nvme_zns.o 00:04:49.735 CXX test/cpp_headers/nvmf_cmd.o 00:04:49.735 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:49.735 CXX test/cpp_headers/nvmf.o 00:04:49.735 CXX test/cpp_headers/nvmf_spec.o 00:04:49.735 CXX test/cpp_headers/nvmf_transport.o 00:04:49.735 CXX test/cpp_headers/opal.o 00:04:49.735 CXX test/cpp_headers/opal_spec.o 00:04:49.735 CXX test/cpp_headers/pci_ids.o 00:04:49.735 CXX test/cpp_headers/pipe.o 00:04:49.735 CXX test/cpp_headers/queue.o 00:04:49.735 CXX test/cpp_headers/reduce.o 00:04:49.735 CXX test/cpp_headers/rpc.o 00:04:49.735 CXX test/cpp_headers/scheduler.o 00:04:49.735 CXX test/cpp_headers/scsi.o 00:04:49.735 CXX test/cpp_headers/scsi_spec.o 00:04:49.993 CXX test/cpp_headers/sock.o 00:04:49.993 CXX test/cpp_headers/stdinc.o 00:04:49.993 CXX test/cpp_headers/string.o 00:04:49.993 CXX test/cpp_headers/thread.o 00:04:49.994 CXX test/cpp_headers/trace.o 00:04:49.994 CXX test/cpp_headers/trace_parser.o 00:04:49.994 CXX test/cpp_headers/tree.o 00:04:49.994 CXX test/cpp_headers/ublk.o 00:04:49.994 CXX test/cpp_headers/util.o 00:04:49.994 CXX test/cpp_headers/uuid.o 00:04:49.994 CXX test/cpp_headers/version.o 00:04:49.994 CXX test/cpp_headers/vfio_user_pci.o 00:04:49.994 CXX test/cpp_headers/vfio_user_spec.o 00:04:49.994 CXX test/cpp_headers/vhost.o 00:04:50.253 CXX test/cpp_headers/vmd.o 00:04:50.253 CXX test/cpp_headers/xor.o 00:04:50.253 CXX test/cpp_headers/zipf.o 00:04:50.253 LINK cuse 00:04:54.448 LINK esnap 00:04:54.707 00:04:54.707 real 1m20.060s 00:04:54.707 user 5m40.445s 00:04:54.707 sys 1m13.746s 00:04:54.707 04:20:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:54.707 04:20:54 make -- common/autotest_common.sh@10 -- $ set +x 00:04:54.707 ************************************ 00:04:54.707 END TEST make 00:04:54.707 ************************************ 00:04:54.707 04:20:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:54.707 
04:20:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:54.707 04:20:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:54.707 04:20:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.707 04:20:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:54.707 04:20:54 -- pm/common@44 -- $ pid=6209 00:04:54.707 04:20:54 -- pm/common@50 -- $ kill -TERM 6209 00:04:54.707 04:20:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.707 04:20:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:54.707 04:20:54 -- pm/common@44 -- $ pid=6211 00:04:54.707 04:20:54 -- pm/common@50 -- $ kill -TERM 6211 00:04:54.707 04:20:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:54.707 04:20:54 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:55.004 04:20:54 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.004 04:20:54 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.004 04:20:54 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.004 04:20:54 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.004 04:20:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.004 04:20:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.004 04:20:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.004 04:20:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.004 04:20:54 -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.004 04:20:54 -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.004 04:20:54 -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.004 04:20:54 -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.004 04:20:54 -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.004 04:20:54 -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.004 04:20:54 -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.004 04:20:54 -- scripts/common.sh@344 -- # case "$op" in 00:04:55.004 04:20:54 -- scripts/common.sh@345 -- # : 1 00:04:55.004 04:20:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.004 04:20:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.004 04:20:54 -- scripts/common.sh@365 -- # decimal 1 00:04:55.004 04:20:54 -- scripts/common.sh@353 -- # local d=1 00:04:55.004 04:20:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.004 04:20:54 -- scripts/common.sh@355 -- # echo 1 00:04:55.004 04:20:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.004 04:20:54 -- scripts/common.sh@366 -- # decimal 2 00:04:55.004 04:20:54 -- scripts/common.sh@353 -- # local d=2 00:04:55.004 04:20:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.004 04:20:54 -- scripts/common.sh@355 -- # echo 2 00:04:55.004 04:20:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.004 04:20:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.004 04:20:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.004 04:20:54 -- scripts/common.sh@368 -- # return 0 00:04:55.004 04:20:54 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.004 04:20:54 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.004 --rc genhtml_branch_coverage=1 00:04:55.004 --rc genhtml_function_coverage=1 00:04:55.004 --rc genhtml_legend=1 00:04:55.004 --rc geninfo_all_blocks=1 00:04:55.004 --rc geninfo_unexecuted_blocks=1 00:04:55.004 00:04:55.004 ' 00:04:55.004 04:20:54 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.004 --rc genhtml_branch_coverage=1 00:04:55.004 --rc genhtml_function_coverage=1 00:04:55.004 --rc genhtml_legend=1 00:04:55.004 --rc 
geninfo_all_blocks=1 00:04:55.004 --rc geninfo_unexecuted_blocks=1 00:04:55.004 00:04:55.004 ' 00:04:55.004 04:20:54 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.004 --rc genhtml_branch_coverage=1 00:04:55.004 --rc genhtml_function_coverage=1 00:04:55.004 --rc genhtml_legend=1 00:04:55.004 --rc geninfo_all_blocks=1 00:04:55.004 --rc geninfo_unexecuted_blocks=1 00:04:55.004 00:04:55.004 ' 00:04:55.004 04:20:54 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.004 --rc genhtml_branch_coverage=1 00:04:55.004 --rc genhtml_function_coverage=1 00:04:55.004 --rc genhtml_legend=1 00:04:55.004 --rc geninfo_all_blocks=1 00:04:55.004 --rc geninfo_unexecuted_blocks=1 00:04:55.004 00:04:55.004 ' 00:04:55.004 04:20:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.004 04:20:54 -- nvmf/common.sh@7 -- # uname -s 00:04:55.004 04:20:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.004 04:20:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.004 04:20:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.004 04:20:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.004 04:20:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.004 04:20:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.004 04:20:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.004 04:20:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.004 04:20:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.004 04:20:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.004 04:20:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab0d7d39-54b9-46b6-a8ab-fec082cf4a1e 00:04:55.004 04:20:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=ab0d7d39-54b9-46b6-a8ab-fec082cf4a1e 00:04:55.004 04:20:54 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.004 04:20:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.004 04:20:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.004 04:20:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.004 04:20:54 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.004 04:20:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.004 04:20:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.004 04:20:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.004 04:20:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.004 04:20:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.004 04:20:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.004 04:20:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.004 04:20:54 -- paths/export.sh@5 -- # export PATH 00:04:55.005 04:20:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.005 04:20:54 -- nvmf/common.sh@51 -- # : 0 00:04:55.005 04:20:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.005 04:20:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.005 04:20:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.005 04:20:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.005 04:20:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.005 04:20:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.005 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.005 04:20:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.005 04:20:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.005 04:20:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.005 04:20:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:55.005 04:20:54 -- spdk/autotest.sh@32 -- # uname -s 00:04:55.005 04:20:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:55.005 04:20:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:55.005 04:20:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:55.005 04:20:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:55.005 04:20:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:55.005 04:20:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:55.005 04:20:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:55.005 04:20:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:55.005 04:20:54 -- spdk/autotest.sh@48 -- # udevadm_pid=68312 
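The `scripts/common.sh` trace earlier in this run (`cmp_versions 1.15 '<' 2`) splits each dotted version string on `.`, `-`, and `:` and compares the fields numerically. A minimal standalone sketch of that split-and-compare approach is below; `version_lt` is an illustrative name, not the script's own helper, and it assumes purely numeric fields:

```shell
# Return 0 when dotted version $1 is strictly less than $2.
# Mirrors the IFS split / field-by-field numeric compare seen in the trace;
# assumes numeric fields only (no "rc1"-style suffixes).
version_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < len; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "older"    # 1 < 2, so this prints "older"
version_lt 2.1 2.0 || echo "newer"   # 2.1 is not < 2.0, so this prints "newer"
```

Comparing numerically per field (not lexicographically) is what makes `1.9 < 1.15` come out true, which plain string comparison would get wrong.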
00:04:55.005 04:20:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:55.005 04:20:54 -- pm/common@17 -- # local monitor 00:04:55.005 04:20:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:55.005 04:20:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:55.005 04:20:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:55.005 04:20:54 -- pm/common@25 -- # sleep 1 00:04:55.005 04:20:54 -- pm/common@21 -- # date +%s 00:04:55.005 04:20:54 -- pm/common@21 -- # date +%s 00:04:55.005 04:20:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734063654 00:04:55.005 04:20:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734063654 00:04:55.277 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734063654_collect-cpu-load.pm.log 00:04:55.277 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734063654_collect-vmstat.pm.log 00:04:56.211 04:20:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:56.211 04:20:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:56.211 04:20:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.211 04:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.211 04:20:56 -- spdk/autotest.sh@59 -- # create_test_list 00:04:56.211 04:20:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:56.211 04:20:56 -- common/autotest_common.sh@10 -- # set +x 00:04:56.211 04:20:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:56.211 04:20:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:56.211 04:20:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:56.211 04:20:56 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:56.211 04:20:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:56.211 04:20:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:56.211 04:20:56 -- common/autotest_common.sh@1457 -- # uname 00:04:56.211 04:20:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:56.211 04:20:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:56.211 04:20:56 -- common/autotest_common.sh@1477 -- # uname 00:04:56.211 04:20:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:56.211 04:20:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:56.211 04:20:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:56.211 lcov: LCOV version 1.15 00:04:56.211 04:20:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:11.107 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:11.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:26.003 04:21:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:26.003 04:21:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.003 04:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:26.003 04:21:25 -- spdk/autotest.sh@78 -- # rm -f 00:05:26.003 04:21:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.262 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.262 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:26.262 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:26.262 04:21:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:26.262 04:21:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:26.262 04:21:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:26.262 04:21:26 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:26.262 04:21:26 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:26.262 04:21:26 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:26.262 04:21:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:26.262 04:21:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:26.262 04:21:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:26.262 04:21:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:26.262 04:21:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:26.262 04:21:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:26.262 04:21:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:26.262 04:21:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:26.262 04:21:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:26.262 04:21:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:26.262 04:21:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:26.262 04:21:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:26.262 04:21:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:26.262 04:21:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:26.262 04:21:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:26.262 04:21:26 -- 
common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:26.262 04:21:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:26.262 04:21:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:26.262 04:21:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:26.262 04:21:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:26.262 04:21:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:26.262 04:21:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:26.262 04:21:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:26.263 04:21:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:26.263 04:21:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:26.263 04:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.263 04:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.263 04:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:26.263 04:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:26.263 04:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:26.263 No valid GPT data, bailing 00:05:26.263 04:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:26.263 04:21:26 -- scripts/common.sh@394 -- # pt= 00:05:26.263 04:21:26 -- scripts/common.sh@395 -- # return 1 00:05:26.263 04:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:26.523 1+0 records in 00:05:26.523 1+0 records out 00:05:26.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663959 s, 158 MB/s 00:05:26.523 04:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.523 04:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.523 04:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:26.523 04:21:26 -- scripts/common.sh@381 -- # local 
block=/dev/nvme1n1 pt 00:05:26.523 04:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:26.523 No valid GPT data, bailing 00:05:26.523 04:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:26.523 04:21:26 -- scripts/common.sh@394 -- # pt= 00:05:26.523 04:21:26 -- scripts/common.sh@395 -- # return 1 00:05:26.523 04:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:26.523 1+0 records in 00:05:26.523 1+0 records out 00:05:26.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677714 s, 155 MB/s 00:05:26.523 04:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.523 04:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.523 04:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:26.523 04:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:26.523 04:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:26.523 No valid GPT data, bailing 00:05:26.523 04:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:26.523 04:21:26 -- scripts/common.sh@394 -- # pt= 00:05:26.523 04:21:26 -- scripts/common.sh@395 -- # return 1 00:05:26.523 04:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:26.523 1+0 records in 00:05:26.523 1+0 records out 00:05:26.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637657 s, 164 MB/s 00:05:26.523 04:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.523 04:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.523 04:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:26.523 04:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:26.523 04:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:26.523 No valid GPT data, bailing 00:05:26.523 04:21:26 -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:26.523 04:21:26 -- scripts/common.sh@394 -- # pt= 00:05:26.523 04:21:26 -- scripts/common.sh@395 -- # return 1 00:05:26.523 04:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:26.523 1+0 records in 00:05:26.523 1+0 records out 00:05:26.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413798 s, 253 MB/s 00:05:26.523 04:21:26 -- spdk/autotest.sh@105 -- # sync 00:05:26.783 04:21:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:26.783 04:21:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:26.783 04:21:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:30.077 04:21:29 -- spdk/autotest.sh@111 -- # uname -s 00:05:30.077 04:21:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:30.077 04:21:29 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:30.077 04:21:29 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:30.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.336 Hugepages 00:05:30.336 node hugesize free / total 00:05:30.336 node0 1048576kB 0 / 0 00:05:30.336 node0 2048kB 0 / 0 00:05:30.336 00:05:30.336 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.596 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:30.596 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:30.596 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:30.596 04:21:30 -- spdk/autotest.sh@117 -- # uname -s 00:05:30.596 04:21:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:30.596 04:21:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:30.596 04:21:30 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.536 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.536 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.795 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.795 04:21:31 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:32.735 04:21:32 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:32.735 04:21:32 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:32.735 04:21:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:32.735 04:21:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:32.735 04:21:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:32.735 04:21:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:32.735 04:21:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.735 04:21:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:32.735 04:21:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:32.735 04:21:32 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:32.735 04:21:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:32.735 04:21:32 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.302 Waiting for block devices as requested 00:05:33.302 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:33.562 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:33.562 04:21:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:33.562 04:21:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:33.562 04:21:33 -- common/autotest_common.sh@1487 -- # readlink -f 
/sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:33.562 04:21:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:33.562 04:21:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:33.562 04:21:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1543 -- # continue 00:05:33.562 04:21:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:33.562 04:21:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:33.562 04:21:33 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:33.562 04:21:33 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:33.562 04:21:33 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:33.562 04:21:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:33.562 04:21:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:33.562 04:21:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:33.562 04:21:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:33.562 04:21:33 -- common/autotest_common.sh@1543 -- # continue 00:05:33.562 04:21:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:33.562 04:21:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.562 04:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.822 04:21:33 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:33.822 04:21:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.822 04:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.822 04:21:33 -- spdk/autotest.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.652 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.652 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.652 04:21:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:34.652 04:21:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.652 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.912 04:21:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:34.912 04:21:34 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:34.912 04:21:34 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.912 04:21:34 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:34.912 04:21:34 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:34.912 04:21:34 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:34.912 04:21:34 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:34.912 04:21:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:34.912 04:21:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:34.912 04:21:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:34.912 04:21:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.912 04:21:34 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:34.912 04:21:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:34.912 04:21:34 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:34.912 04:21:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:34.912 04:21:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:34.913 04:21:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:34.913 04:21:34 -- 
common/autotest_common.sh@1566 -- # device=0x0010 00:05:34.913 04:21:34 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:34.913 04:21:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:34.913 04:21:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:34.913 04:21:34 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:34.913 04:21:34 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:34.913 04:21:34 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:34.913 04:21:34 -- common/autotest_common.sh@1572 -- # return 0 00:05:34.913 04:21:34 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:34.913 04:21:34 -- common/autotest_common.sh@1580 -- # return 0 00:05:34.913 04:21:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:34.913 04:21:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:34.913 04:21:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:34.913 04:21:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:34.913 04:21:34 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:34.913 04:21:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.913 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.913 04:21:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:34.913 04:21:34 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:34.913 04:21:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.913 04:21:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.913 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:34.913 ************************************ 00:05:34.913 START TEST env 00:05:34.913 ************************************ 00:05:34.913 04:21:34 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:34.913 * Looking for test storage... 
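The `pre_cleanup` trace above shows the pattern autotest applies per namespace: probe for a partition table, and when none is found ("No valid GPT data, bailing", empty `blkid` PTTYPE), zero the first MiB with `dd`. A rough standalone sketch of that step is below; the function name is illustrative, and pointing it at a real disk is destructive:

```shell
# Sketch of the wipe step in the trace: if no partition table is detected,
# zero the first 1 MiB so stale metadata cannot leak into later tests.
# Illustrative helper, not autotest's actual function.
wipe_if_unpartitioned() {
  local dev=$1 pt
  # blkid prints the partition-table type (e.g. "gpt") or nothing at all
  pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
  if [[ -z $pt ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1 status=none
  else
    echo "$dev has a $pt partition table; skipping"
  fi
}
```

Wiping only the first MiB is enough here because that region holds the MBR/GPT headers and filesystem superblocks the later tests would otherwise misdetect.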
00:05:35.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:35.183 04:21:34 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.183 04:21:34 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.183 04:21:34 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.183 04:21:35 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.183 04:21:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.183 04:21:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.183 04:21:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.183 04:21:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.183 04:21:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.183 04:21:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.183 04:21:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.183 04:21:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.183 04:21:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.183 04:21:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.183 04:21:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.183 04:21:35 env -- scripts/common.sh@344 -- # case "$op" in 00:05:35.183 04:21:35 env -- scripts/common.sh@345 -- # : 1 00:05:35.183 04:21:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.183 04:21:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.183 04:21:35 env -- scripts/common.sh@365 -- # decimal 1 00:05:35.183 04:21:35 env -- scripts/common.sh@353 -- # local d=1 00:05:35.183 04:21:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.183 04:21:35 env -- scripts/common.sh@355 -- # echo 1 00:05:35.183 04:21:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.183 04:21:35 env -- scripts/common.sh@366 -- # decimal 2 00:05:35.183 04:21:35 env -- scripts/common.sh@353 -- # local d=2 00:05:35.183 04:21:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.183 04:21:35 env -- scripts/common.sh@355 -- # echo 2 00:05:35.183 04:21:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.183 04:21:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.183 04:21:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.183 04:21:35 env -- scripts/common.sh@368 -- # return 0 00:05:35.183 04:21:35 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.183 04:21:35 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.183 --rc genhtml_branch_coverage=1 00:05:35.183 --rc genhtml_function_coverage=1 00:05:35.183 --rc genhtml_legend=1 00:05:35.183 --rc geninfo_all_blocks=1 00:05:35.183 --rc geninfo_unexecuted_blocks=1 00:05:35.183 00:05:35.183 ' 00:05:35.183 04:21:35 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.183 --rc genhtml_branch_coverage=1 00:05:35.183 --rc genhtml_function_coverage=1 00:05:35.183 --rc genhtml_legend=1 00:05:35.183 --rc geninfo_all_blocks=1 00:05:35.183 --rc geninfo_unexecuted_blocks=1 00:05:35.183 00:05:35.183 ' 00:05:35.183 04:21:35 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:35.183 --rc genhtml_branch_coverage=1 00:05:35.183 --rc genhtml_function_coverage=1 00:05:35.183 --rc genhtml_legend=1 00:05:35.183 --rc geninfo_all_blocks=1 00:05:35.183 --rc geninfo_unexecuted_blocks=1 00:05:35.183 00:05:35.184 ' 00:05:35.184 04:21:35 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.184 --rc genhtml_branch_coverage=1 00:05:35.184 --rc genhtml_function_coverage=1 00:05:35.184 --rc genhtml_legend=1 00:05:35.184 --rc geninfo_all_blocks=1 00:05:35.184 --rc geninfo_unexecuted_blocks=1 00:05:35.184 00:05:35.184 ' 00:05:35.184 04:21:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:35.184 04:21:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.184 04:21:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.184 04:21:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.184 ************************************ 00:05:35.184 START TEST env_memory 00:05:35.184 ************************************ 00:05:35.184 04:21:35 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:35.184 00:05:35.184 00:05:35.184 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.184 http://cunit.sourceforge.net/ 00:05:35.184 00:05:35.184 00:05:35.184 Suite: memory 00:05:35.184 Test: alloc and free memory map ...[2024-12-13 04:21:35.101068] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.184 passed 00:05:35.184 Test: mem map translation ...[2024-12-13 04:21:35.141355] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.184 [2024-12-13 04:21:35.141393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.184 [2024-12-13 04:21:35.141486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.184 [2024-12-13 04:21:35.141521] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.446 passed 00:05:35.446 Test: mem map registration ...[2024-12-13 04:21:35.203896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:35.446 [2024-12-13 04:21:35.203926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:35.446 passed 00:05:35.446 Test: mem map adjacent registrations ...passed 00:05:35.446 00:05:35.446 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.446 suites 1 1 n/a 0 0 00:05:35.446 tests 4 4 4 0 0 00:05:35.446 asserts 152 152 152 0 n/a 00:05:35.446 00:05:35.446 Elapsed time = 0.225 seconds 00:05:35.446 00:05:35.446 real 0m0.277s 00:05:35.446 user 0m0.240s 00:05:35.446 sys 0m0.024s 00:05:35.446 04:21:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.446 04:21:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:35.446 ************************************ 00:05:35.446 END TEST env_memory 00:05:35.446 ************************************ 00:05:35.446 04:21:35 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:35.446 04:21:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.446 04:21:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.446 04:21:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.446 
************************************ 00:05:35.446 START TEST env_vtophys 00:05:35.446 ************************************ 00:05:35.446 04:21:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:35.446 EAL: lib.eal log level changed from notice to debug 00:05:35.446 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 1 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 2 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 3 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 4 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 5 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 6 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 7 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 8 as core 0 on socket 0 00:05:35.446 EAL: Detected lcore 9 as core 0 on socket 0 00:05:35.446 EAL: Maximum logical cores by configuration: 128 00:05:35.446 EAL: Detected CPU lcores: 10 00:05:35.446 EAL: Detected NUMA nodes: 1 00:05:35.447 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:35.447 EAL: Detected shared linkage of DPDK 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:35.447 EAL: Registered [vdev] bus. 
00:05:35.447 EAL: bus.vdev log level changed from disabled to notice 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:35.447 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:35.447 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:35.447 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:35.447 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.447 EAL: No shared files mode enabled, IPC is disabled 00:05:35.447 EAL: Selected IOVA mode 'PA' 00:05:35.447 EAL: Probing VFIO support... 00:05:35.447 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:35.447 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:35.447 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.447 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.447 EAL: Setting up physically contiguous memory... 
00:05:35.447 EAL: Setting maximum number of open files to 524288 00:05:35.447 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.447 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.447 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.447 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.447 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.447 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.447 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.447 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.447 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.447 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.447 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.447 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.447 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.447 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.447 EAL: Hugepages will be freed exactly as allocated. 
00:05:35.447 EAL: No shared files mode enabled, IPC is disabled 00:05:35.447 EAL: No shared files mode enabled, IPC is disabled 00:05:35.706 EAL: TSC frequency is ~2290000 KHz 00:05:35.706 EAL: Main lcore 0 is ready (tid=7f7f5bdd6a40;cpuset=[0]) 00:05:35.706 EAL: Trying to obtain current memory policy. 00:05:35.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.706 EAL: Restoring previous memory policy: 0 00:05:35.706 EAL: request: mp_malloc_sync 00:05:35.706 EAL: No shared files mode enabled, IPC is disabled 00:05:35.706 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.706 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:35.706 EAL: No shared files mode enabled, IPC is disabled 00:05:35.706 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.706 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.706 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:35.706 00:05:35.706 00:05:35.706 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.706 http://cunit.sourceforge.net/ 00:05:35.706 00:05:35.706 00:05:35.706 Suite: components_suite 00:05:35.966 Test: vtophys_malloc_test ...passed 00:05:35.966 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.966 EAL: Restoring previous memory policy: 4 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.966 EAL: Trying to obtain current memory policy. 
00:05:35.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.966 EAL: Restoring previous memory policy: 4 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.966 EAL: Trying to obtain current memory policy. 00:05:35.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.966 EAL: Restoring previous memory policy: 4 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.966 EAL: Trying to obtain current memory policy. 00:05:35.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.966 EAL: Restoring previous memory policy: 4 00:05:35.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.966 EAL: request: mp_malloc_sync 00:05:35.966 EAL: No shared files mode enabled, IPC is disabled 00:05:35.966 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.967 EAL: request: mp_malloc_sync 00:05:35.967 EAL: No shared files mode enabled, IPC is disabled 00:05:35.967 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.967 EAL: Trying to obtain current memory policy. 
00:05:35.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.967 EAL: Restoring previous memory policy: 4 00:05:35.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.967 EAL: request: mp_malloc_sync 00:05:35.967 EAL: No shared files mode enabled, IPC is disabled 00:05:35.967 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.967 EAL: request: mp_malloc_sync 00:05:35.967 EAL: No shared files mode enabled, IPC is disabled 00:05:35.967 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.967 EAL: Trying to obtain current memory policy. 00:05:35.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.967 EAL: Restoring previous memory policy: 4 00:05:35.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.967 EAL: request: mp_malloc_sync 00:05:35.967 EAL: No shared files mode enabled, IPC is disabled 00:05:35.967 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.967 EAL: request: mp_malloc_sync 00:05:35.967 EAL: No shared files mode enabled, IPC is disabled 00:05:35.967 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.967 EAL: Trying to obtain current memory policy. 00:05:35.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.967 EAL: Restoring previous memory policy: 4 00:05:35.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.967 EAL: request: mp_malloc_sync 00:05:35.967 EAL: No shared files mode enabled, IPC is disabled 00:05:35.967 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.227 EAL: request: mp_malloc_sync 00:05:36.227 EAL: No shared files mode enabled, IPC is disabled 00:05:36.227 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.227 EAL: Trying to obtain current memory policy. 
00:05:36.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.227 EAL: Restoring previous memory policy: 4 00:05:36.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.227 EAL: request: mp_malloc_sync 00:05:36.227 EAL: No shared files mode enabled, IPC is disabled 00:05:36.227 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.227 EAL: request: mp_malloc_sync 00:05:36.227 EAL: No shared files mode enabled, IPC is disabled 00:05:36.227 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.227 EAL: Trying to obtain current memory policy. 00:05:36.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.487 EAL: Restoring previous memory policy: 4 00:05:36.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.487 EAL: request: mp_malloc_sync 00:05:36.487 EAL: No shared files mode enabled, IPC is disabled 00:05:36.487 EAL: Heap on socket 0 was expanded by 514MB 00:05:36.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.487 EAL: request: mp_malloc_sync 00:05:36.487 EAL: No shared files mode enabled, IPC is disabled 00:05:36.487 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.487 EAL: Trying to obtain current memory policy. 
00:05:36.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.746 EAL: Restoring previous memory policy: 4 00:05:36.746 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.746 EAL: request: mp_malloc_sync 00:05:36.746 EAL: No shared files mode enabled, IPC is disabled 00:05:36.746 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.007 passed 00:05:37.007 00:05:37.007 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.007 suites 1 1 n/a 0 0 00:05:37.007 tests 2 2 2 0 0 00:05:37.007 asserts 5442 5442 5442 0 n/a 00:05:37.007 00:05:37.007 Elapsed time = 1.378 seconds 00:05:37.007 EAL: request: mp_malloc_sync 00:05:37.007 EAL: No shared files mode enabled, IPC is disabled 00:05:37.007 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:37.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.007 EAL: request: mp_malloc_sync 00:05:37.007 EAL: No shared files mode enabled, IPC is disabled 00:05:37.007 EAL: Heap on socket 0 was shrunk by 2MB 00:05:37.007 EAL: No shared files mode enabled, IPC is disabled 00:05:37.007 EAL: No shared files mode enabled, IPC is disabled 00:05:37.007 EAL: No shared files mode enabled, IPC is disabled 00:05:37.007 00:05:37.007 real 0m1.630s 00:05:37.007 user 0m0.811s 00:05:37.007 sys 0m0.686s 00:05:37.007 04:21:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.007 04:21:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:37.007 ************************************ 00:05:37.007 END TEST env_vtophys 00:05:37.007 ************************************ 00:05:37.267 04:21:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:37.267 04:21:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.267 04:21:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.267 04:21:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.267 
************************************ 00:05:37.267 START TEST env_pci 00:05:37.267 ************************************ 00:05:37.267 04:21:37 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:37.267 00:05:37.267 00:05:37.267 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.267 http://cunit.sourceforge.net/ 00:05:37.267 00:05:37.267 00:05:37.267 Suite: pci 00:05:37.267 Test: pci_hook ...[2024-12-13 04:21:37.097613] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70561 has claimed it 00:05:37.267 passed 00:05:37.267 00:05:37.267 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.267 suites 1 1 n/a 0 0 00:05:37.267 tests 1 1 1 0 0 00:05:37.267 asserts 25 25 25 0 n/a 00:05:37.267 00:05:37.267 Elapsed time = 0.007 seconds 00:05:37.267 EAL: Cannot find device (10000:00:01.0) 00:05:37.267 EAL: Failed to attach device on primary process 00:05:37.267 00:05:37.267 real 0m0.084s 00:05:37.267 user 0m0.033s 00:05:37.267 sys 0m0.051s 00:05:37.267 04:21:37 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.267 04:21:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:37.267 ************************************ 00:05:37.267 END TEST env_pci 00:05:37.267 ************************************ 00:05:37.267 04:21:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:37.267 04:21:37 env -- env/env.sh@15 -- # uname 00:05:37.267 04:21:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:37.267 04:21:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:37.267 04:21:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.267 04:21:37 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:37.267 04:21:37 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.267 04:21:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.267 ************************************ 00:05:37.267 START TEST env_dpdk_post_init 00:05:37.267 ************************************ 00:05:37.267 04:21:37 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.267 EAL: Detected CPU lcores: 10 00:05:37.267 EAL: Detected NUMA nodes: 1 00:05:37.267 EAL: Detected shared linkage of DPDK 00:05:37.267 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.267 EAL: Selected IOVA mode 'PA' 00:05:37.527 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.527 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:37.527 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:37.527 Starting DPDK initialization... 00:05:37.527 Starting SPDK post initialization... 00:05:37.527 SPDK NVMe probe 00:05:37.527 Attaching to 0000:00:10.0 00:05:37.527 Attaching to 0000:00:11.0 00:05:37.527 Attached to 0000:00:10.0 00:05:37.527 Attached to 0000:00:11.0 00:05:37.527 Cleaning up... 
00:05:37.527 00:05:37.527 real 0m0.239s 00:05:37.527 user 0m0.075s 00:05:37.527 sys 0m0.065s 00:05:37.527 04:21:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.527 04:21:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.527 ************************************ 00:05:37.527 END TEST env_dpdk_post_init 00:05:37.527 ************************************ 00:05:37.527 04:21:37 env -- env/env.sh@26 -- # uname 00:05:37.527 04:21:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:37.527 04:21:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.527 04:21:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.527 04:21:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.527 04:21:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.527 ************************************ 00:05:37.527 START TEST env_mem_callbacks 00:05:37.527 ************************************ 00:05:37.527 04:21:37 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.787 EAL: Detected CPU lcores: 10 00:05:37.787 EAL: Detected NUMA nodes: 1 00:05:37.787 EAL: Detected shared linkage of DPDK 00:05:37.787 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.787 EAL: Selected IOVA mode 'PA' 00:05:37.787 00:05:37.787 00:05:37.787 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.787 http://cunit.sourceforge.net/ 00:05:37.787 00:05:37.787 00:05:37.787 Suite: memory 00:05:37.787 Test: test ... 
00:05:37.787 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.787 register 0x200000200000 2097152 00:05:37.787 malloc 3145728 00:05:37.787 register 0x200000400000 4194304 00:05:37.787 buf 0x200000500000 len 3145728 PASSED 00:05:37.787 malloc 64 00:05:37.787 buf 0x2000004fff40 len 64 PASSED 00:05:37.788 malloc 4194304 00:05:37.788 register 0x200000800000 6291456 00:05:37.788 buf 0x200000a00000 len 4194304 PASSED 00:05:37.788 free 0x200000500000 3145728 00:05:37.788 free 0x2000004fff40 64 00:05:37.788 unregister 0x200000400000 4194304 PASSED 00:05:37.788 free 0x200000a00000 4194304 00:05:37.788 unregister 0x200000800000 6291456 PASSED 00:05:37.788 malloc 8388608 00:05:37.788 register 0x200000400000 10485760 00:05:37.788 buf 0x200000600000 len 8388608 PASSED 00:05:37.788 free 0x200000600000 8388608 00:05:37.788 unregister 0x200000400000 10485760 PASSED 00:05:37.788 passed 00:05:37.788 00:05:37.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.788 suites 1 1 n/a 0 0 00:05:37.788 tests 1 1 1 0 0 00:05:37.788 asserts 15 15 15 0 n/a 00:05:37.788 00:05:37.788 Elapsed time = 0.011 seconds 00:05:37.788 00:05:37.788 real 0m0.183s 00:05:37.788 user 0m0.029s 00:05:37.788 sys 0m0.052s 00:05:37.788 04:21:37 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.788 04:21:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 ************************************ 00:05:37.788 END TEST env_mem_callbacks 00:05:37.788 ************************************ 00:05:37.788 00:05:37.788 real 0m2.960s 00:05:37.788 user 0m1.398s 00:05:37.788 sys 0m1.242s 00:05:37.788 04:21:37 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.788 04:21:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 ************************************ 00:05:37.788 END TEST env 00:05:37.788 ************************************ 00:05:38.048 04:21:37 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:38.048 04:21:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.048 04:21:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.048 04:21:37 -- common/autotest_common.sh@10 -- # set +x 00:05:38.048 ************************************ 00:05:38.048 START TEST rpc 00:05:38.048 ************************************ 00:05:38.048 04:21:37 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:38.048 * Looking for test storage... 00:05:38.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.048 04:21:37 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.048 04:21:37 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.048 04:21:37 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.048 04:21:38 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.048 04:21:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.048 04:21:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.048 04:21:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.048 04:21:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.048 04:21:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.048 04:21:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.048 04:21:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.048 04:21:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.048 04:21:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.048 04:21:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.048 04:21:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.049 04:21:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:38.049 04:21:38 rpc -- scripts/common.sh@345 -- # : 1 00:05:38.049 04:21:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.049 04:21:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.049 04:21:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.049 04:21:38 rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.049 04:21:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.049 04:21:38 rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.049 04:21:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.049 04:21:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.049 04:21:38 rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.049 04:21:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.049 04:21:38 rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.049 04:21:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.049 04:21:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.049 04:21:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.049 04:21:38 rpc -- scripts/common.sh@368 -- # return 0 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.049 --rc genhtml_branch_coverage=1 00:05:38.049 --rc genhtml_function_coverage=1 00:05:38.049 --rc genhtml_legend=1 00:05:38.049 --rc geninfo_all_blocks=1 00:05:38.049 --rc geninfo_unexecuted_blocks=1 00:05:38.049 00:05:38.049 ' 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.049 --rc genhtml_branch_coverage=1 00:05:38.049 --rc genhtml_function_coverage=1 00:05:38.049 --rc genhtml_legend=1 00:05:38.049 --rc geninfo_all_blocks=1 00:05:38.049 --rc geninfo_unexecuted_blocks=1 00:05:38.049 00:05:38.049 ' 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:38.049 --rc genhtml_branch_coverage=1 00:05:38.049 --rc genhtml_function_coverage=1 00:05:38.049 --rc genhtml_legend=1 00:05:38.049 --rc geninfo_all_blocks=1 00:05:38.049 --rc geninfo_unexecuted_blocks=1 00:05:38.049 00:05:38.049 ' 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.049 --rc genhtml_branch_coverage=1 00:05:38.049 --rc genhtml_function_coverage=1 00:05:38.049 --rc genhtml_legend=1 00:05:38.049 --rc geninfo_all_blocks=1 00:05:38.049 --rc geninfo_unexecuted_blocks=1 00:05:38.049 00:05:38.049 ' 00:05:38.049 04:21:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70688 00:05:38.049 04:21:38 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:38.049 04:21:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.049 04:21:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70688 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 70688 ']' 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.049 04:21:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.309 [2024-12-13 04:21:38.146878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:38.309 [2024-12-13 04:21:38.147046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70688 ] 00:05:38.309 [2024-12-13 04:21:38.300986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.568 [2024-12-13 04:21:38.328287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.568 [2024-12-13 04:21:38.328339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70688' to capture a snapshot of events at runtime. 00:05:38.569 [2024-12-13 04:21:38.328351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.569 [2024-12-13 04:21:38.328360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.569 [2024-12-13 04:21:38.328383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70688 for offline analysis/debug. 
00:05:38.569 [2024-12-13 04:21:38.328755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.138 04:21:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.138 04:21:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.138 04:21:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.138 04:21:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.138 04:21:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.138 04:21:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.138 04:21:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.138 04:21:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.138 04:21:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.138 ************************************ 00:05:39.138 START TEST rpc_integrity 00:05:39.139 ************************************ 00:05:39.139 04:21:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:39.139 04:21:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.139 04:21:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.139 04:21:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 04:21:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.139 04:21:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.139 04:21:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.139 04:21:39 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.139 { 00:05:39.139 "name": "Malloc0", 00:05:39.139 "aliases": [ 00:05:39.139 "5d6d38b8-7199-4ab8-8a79-c04afcd52233" 00:05:39.139 ], 00:05:39.139 "product_name": "Malloc disk", 00:05:39.139 "block_size": 512, 00:05:39.139 "num_blocks": 16384, 00:05:39.139 "uuid": "5d6d38b8-7199-4ab8-8a79-c04afcd52233", 00:05:39.139 "assigned_rate_limits": { 00:05:39.139 "rw_ios_per_sec": 0, 00:05:39.139 "rw_mbytes_per_sec": 0, 00:05:39.139 "r_mbytes_per_sec": 0, 00:05:39.139 "w_mbytes_per_sec": 0 00:05:39.139 }, 00:05:39.139 "claimed": false, 00:05:39.139 "zoned": false, 00:05:39.139 "supported_io_types": { 00:05:39.139 "read": true, 00:05:39.139 "write": true, 00:05:39.139 "unmap": true, 00:05:39.139 "flush": true, 00:05:39.139 "reset": true, 00:05:39.139 "nvme_admin": false, 00:05:39.139 "nvme_io": false, 00:05:39.139 "nvme_io_md": false, 00:05:39.139 "write_zeroes": true, 00:05:39.139 "zcopy": true, 00:05:39.139 "get_zone_info": false, 00:05:39.139 "zone_management": false, 00:05:39.139 "zone_append": false, 00:05:39.139 "compare": false, 00:05:39.139 "compare_and_write": false, 00:05:39.139 "abort": true, 00:05:39.139 "seek_hole": false, 
00:05:39.139 "seek_data": false, 00:05:39.139 "copy": true, 00:05:39.139 "nvme_iov_md": false 00:05:39.139 }, 00:05:39.139 "memory_domains": [ 00:05:39.139 { 00:05:39.139 "dma_device_id": "system", 00:05:39.139 "dma_device_type": 1 00:05:39.139 }, 00:05:39.139 { 00:05:39.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.139 "dma_device_type": 2 00:05:39.139 } 00:05:39.139 ], 00:05:39.139 "driver_specific": {} 00:05:39.139 } 00:05:39.139 ]' 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 [2024-12-13 04:21:39.120761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.139 [2024-12-13 04:21:39.120820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.139 [2024-12-13 04:21:39.120848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:39.139 [2024-12-13 04:21:39.120858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.139 [2024-12-13 04:21:39.123156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.139 [2024-12-13 04:21:39.123195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.139 Passthru0 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:39.139 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.139 { 00:05:39.139 "name": "Malloc0", 00:05:39.139 "aliases": [ 00:05:39.139 "5d6d38b8-7199-4ab8-8a79-c04afcd52233" 00:05:39.139 ], 00:05:39.139 "product_name": "Malloc disk", 00:05:39.139 "block_size": 512, 00:05:39.139 "num_blocks": 16384, 00:05:39.139 "uuid": "5d6d38b8-7199-4ab8-8a79-c04afcd52233", 00:05:39.139 "assigned_rate_limits": { 00:05:39.139 "rw_ios_per_sec": 0, 00:05:39.139 "rw_mbytes_per_sec": 0, 00:05:39.139 "r_mbytes_per_sec": 0, 00:05:39.139 "w_mbytes_per_sec": 0 00:05:39.139 }, 00:05:39.139 "claimed": true, 00:05:39.139 "claim_type": "exclusive_write", 00:05:39.139 "zoned": false, 00:05:39.139 "supported_io_types": { 00:05:39.139 "read": true, 00:05:39.139 "write": true, 00:05:39.139 "unmap": true, 00:05:39.139 "flush": true, 00:05:39.139 "reset": true, 00:05:39.139 "nvme_admin": false, 00:05:39.139 "nvme_io": false, 00:05:39.139 "nvme_io_md": false, 00:05:39.139 "write_zeroes": true, 00:05:39.139 "zcopy": true, 00:05:39.139 "get_zone_info": false, 00:05:39.139 "zone_management": false, 00:05:39.139 "zone_append": false, 00:05:39.139 "compare": false, 00:05:39.139 "compare_and_write": false, 00:05:39.139 "abort": true, 00:05:39.139 "seek_hole": false, 00:05:39.139 "seek_data": false, 00:05:39.139 "copy": true, 00:05:39.139 "nvme_iov_md": false 00:05:39.139 }, 00:05:39.139 "memory_domains": [ 00:05:39.139 { 00:05:39.139 "dma_device_id": "system", 00:05:39.139 "dma_device_type": 1 00:05:39.139 }, 00:05:39.139 { 00:05:39.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.139 "dma_device_type": 2 00:05:39.139 } 00:05:39.139 ], 00:05:39.139 "driver_specific": {} 00:05:39.139 }, 00:05:39.139 { 00:05:39.139 "name": "Passthru0", 00:05:39.139 "aliases": [ 00:05:39.139 "9fb30919-9826-5e0f-ac71-c94e05582b09" 00:05:39.139 ], 00:05:39.139 "product_name": "passthru", 00:05:39.139 
"block_size": 512, 00:05:39.139 "num_blocks": 16384, 00:05:39.139 "uuid": "9fb30919-9826-5e0f-ac71-c94e05582b09", 00:05:39.139 "assigned_rate_limits": { 00:05:39.139 "rw_ios_per_sec": 0, 00:05:39.139 "rw_mbytes_per_sec": 0, 00:05:39.139 "r_mbytes_per_sec": 0, 00:05:39.139 "w_mbytes_per_sec": 0 00:05:39.139 }, 00:05:39.139 "claimed": false, 00:05:39.139 "zoned": false, 00:05:39.139 "supported_io_types": { 00:05:39.139 "read": true, 00:05:39.139 "write": true, 00:05:39.139 "unmap": true, 00:05:39.139 "flush": true, 00:05:39.139 "reset": true, 00:05:39.139 "nvme_admin": false, 00:05:39.139 "nvme_io": false, 00:05:39.139 "nvme_io_md": false, 00:05:39.139 "write_zeroes": true, 00:05:39.139 "zcopy": true, 00:05:39.139 "get_zone_info": false, 00:05:39.139 "zone_management": false, 00:05:39.139 "zone_append": false, 00:05:39.139 "compare": false, 00:05:39.139 "compare_and_write": false, 00:05:39.139 "abort": true, 00:05:39.139 "seek_hole": false, 00:05:39.139 "seek_data": false, 00:05:39.139 "copy": true, 00:05:39.139 "nvme_iov_md": false 00:05:39.139 }, 00:05:39.139 "memory_domains": [ 00:05:39.139 { 00:05:39.139 "dma_device_id": "system", 00:05:39.139 "dma_device_type": 1 00:05:39.139 }, 00:05:39.139 { 00:05:39.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.139 "dma_device_type": 2 00:05:39.139 } 00:05:39.139 ], 00:05:39.139 "driver_specific": { 00:05:39.139 "passthru": { 00:05:39.139 "name": "Passthru0", 00:05:39.139 "base_bdev_name": "Malloc0" 00:05:39.139 } 00:05:39.139 } 00:05:39.139 } 00:05:39.139 ]' 00:05:39.139 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 04:21:39 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.399 04:21:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.399 00:05:39.399 real 0m0.319s 00:05:39.399 user 0m0.193s 00:05:39.399 sys 0m0.048s 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.399 04:21:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 ************************************ 00:05:39.399 END TEST rpc_integrity 00:05:39.399 ************************************ 00:05:39.399 04:21:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.399 04:21:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.399 04:21:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.399 04:21:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 ************************************ 00:05:39.399 START TEST rpc_plugins 00:05:39.399 ************************************ 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:39.399 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.399 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.399 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.399 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.399 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.399 { 00:05:39.399 "name": "Malloc1", 00:05:39.399 "aliases": [ 00:05:39.399 "5b5d312f-f15f-43c0-819f-f15ed0069e97" 00:05:39.399 ], 00:05:39.399 "product_name": "Malloc disk", 00:05:39.399 "block_size": 4096, 00:05:39.399 "num_blocks": 256, 00:05:39.399 "uuid": "5b5d312f-f15f-43c0-819f-f15ed0069e97", 00:05:39.399 "assigned_rate_limits": { 00:05:39.399 "rw_ios_per_sec": 0, 00:05:39.399 "rw_mbytes_per_sec": 0, 00:05:39.399 "r_mbytes_per_sec": 0, 00:05:39.399 "w_mbytes_per_sec": 0 00:05:39.399 }, 00:05:39.399 "claimed": false, 00:05:39.399 "zoned": false, 00:05:39.399 "supported_io_types": { 00:05:39.399 "read": true, 00:05:39.399 "write": true, 00:05:39.399 "unmap": true, 00:05:39.399 "flush": true, 00:05:39.399 "reset": true, 00:05:39.399 "nvme_admin": false, 00:05:39.399 "nvme_io": false, 00:05:39.399 "nvme_io_md": false, 00:05:39.399 "write_zeroes": true, 00:05:39.399 "zcopy": true, 00:05:39.399 "get_zone_info": false, 00:05:39.399 "zone_management": false, 00:05:39.399 "zone_append": false, 00:05:39.399 "compare": false, 00:05:39.399 "compare_and_write": false, 00:05:39.399 "abort": true, 00:05:39.399 "seek_hole": false, 00:05:39.399 "seek_data": false, 00:05:39.399 "copy": 
true, 00:05:39.399 "nvme_iov_md": false 00:05:39.399 }, 00:05:39.399 "memory_domains": [ 00:05:39.399 { 00:05:39.399 "dma_device_id": "system", 00:05:39.399 "dma_device_type": 1 00:05:39.399 }, 00:05:39.399 { 00:05:39.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.399 "dma_device_type": 2 00:05:39.399 } 00:05:39.399 ], 00:05:39.399 "driver_specific": {} 00:05:39.399 } 00:05:39.399 ]' 00:05:39.399 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:39.658 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.659 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.659 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.659 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.659 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:39.659 04:21:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.659 00:05:39.659 real 0m0.155s 00:05:39.659 user 0m0.086s 00:05:39.659 sys 0m0.030s 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.659 04:21:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.659 ************************************ 00:05:39.659 END TEST rpc_plugins 00:05:39.659 ************************************ 00:05:39.659 04:21:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.659 04:21:39 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.659 04:21:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.659 04:21:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.659 ************************************ 00:05:39.659 START TEST rpc_trace_cmd_test 00:05:39.659 ************************************ 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:39.659 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70688", 00:05:39.659 "tpoint_group_mask": "0x8", 00:05:39.659 "iscsi_conn": { 00:05:39.659 "mask": "0x2", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "scsi": { 00:05:39.659 "mask": "0x4", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "bdev": { 00:05:39.659 "mask": "0x8", 00:05:39.659 "tpoint_mask": "0xffffffffffffffff" 00:05:39.659 }, 00:05:39.659 "nvmf_rdma": { 00:05:39.659 "mask": "0x10", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "nvmf_tcp": { 00:05:39.659 "mask": "0x20", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "ftl": { 00:05:39.659 "mask": "0x40", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "blobfs": { 00:05:39.659 "mask": "0x80", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "dsa": { 00:05:39.659 "mask": "0x200", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "thread": { 00:05:39.659 "mask": "0x400", 00:05:39.659 
"tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "nvme_pcie": { 00:05:39.659 "mask": "0x800", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "iaa": { 00:05:39.659 "mask": "0x1000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "nvme_tcp": { 00:05:39.659 "mask": "0x2000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "bdev_nvme": { 00:05:39.659 "mask": "0x4000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "sock": { 00:05:39.659 "mask": "0x8000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "blob": { 00:05:39.659 "mask": "0x10000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "bdev_raid": { 00:05:39.659 "mask": "0x20000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 }, 00:05:39.659 "scheduler": { 00:05:39.659 "mask": "0x40000", 00:05:39.659 "tpoint_mask": "0x0" 00:05:39.659 } 00:05:39.659 }' 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:39.659 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:39.919 00:05:39.919 real 0m0.232s 00:05:39.919 user 0m0.183s 00:05:39.919 sys 0m0.041s 00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:39.919 04:21:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.919 ************************************ 00:05:39.919 END TEST rpc_trace_cmd_test 00:05:39.919 ************************************ 00:05:39.919 04:21:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:39.919 04:21:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:39.919 04:21:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:39.919 04:21:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.919 04:21:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.919 04:21:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.919 ************************************ 00:05:39.919 START TEST rpc_daemon_integrity 00:05:39.919 ************************************ 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.919 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.919 { 00:05:39.919 "name": "Malloc2", 00:05:39.919 "aliases": [ 00:05:39.919 "790494de-1fb2-4548-a0c6-bcefa3ec1483" 00:05:39.919 ], 00:05:39.919 "product_name": "Malloc disk", 00:05:39.919 "block_size": 512, 00:05:39.919 "num_blocks": 16384, 00:05:39.919 "uuid": "790494de-1fb2-4548-a0c6-bcefa3ec1483", 00:05:39.919 "assigned_rate_limits": { 00:05:39.919 "rw_ios_per_sec": 0, 00:05:39.919 "rw_mbytes_per_sec": 0, 00:05:39.919 "r_mbytes_per_sec": 0, 00:05:39.919 "w_mbytes_per_sec": 0 00:05:39.919 }, 00:05:39.919 "claimed": false, 00:05:39.919 "zoned": false, 00:05:39.919 "supported_io_types": { 00:05:39.919 "read": true, 00:05:39.919 "write": true, 00:05:39.919 "unmap": true, 00:05:39.919 "flush": true, 00:05:39.919 "reset": true, 00:05:39.919 "nvme_admin": false, 00:05:39.919 "nvme_io": false, 00:05:39.919 "nvme_io_md": false, 00:05:39.919 "write_zeroes": true, 00:05:39.919 "zcopy": true, 00:05:39.919 "get_zone_info": false, 00:05:39.919 "zone_management": false, 00:05:39.919 "zone_append": false, 00:05:39.919 "compare": false, 00:05:39.919 "compare_and_write": false, 00:05:39.919 "abort": true, 00:05:39.919 "seek_hole": false, 00:05:39.919 "seek_data": false, 00:05:39.919 "copy": true, 00:05:39.919 "nvme_iov_md": false 00:05:39.919 }, 00:05:39.919 "memory_domains": [ 00:05:39.919 { 00:05:39.919 "dma_device_id": "system", 00:05:39.919 "dma_device_type": 1 00:05:39.919 }, 00:05:39.919 { 00:05:39.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.919 "dma_device_type": 2 00:05:39.919 } 
00:05:39.919 ], 00:05:39.919 "driver_specific": {} 00:05:39.919 } 00:05:39.919 ]' 00:05:40.179 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.179 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.179 04:21:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:40.179 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.179 04:21:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.179 [2024-12-13 04:21:39.999930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:40.179 [2024-12-13 04:21:39.999993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.179 [2024-12-13 04:21:40.000015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:40.179 [2024-12-13 04:21:40.000023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.179 [2024-12-13 04:21:40.002263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.179 [2024-12-13 04:21:40.002341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.179 Passthru0 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.179 { 00:05:40.179 "name": "Malloc2", 00:05:40.179 "aliases": [ 00:05:40.179 "790494de-1fb2-4548-a0c6-bcefa3ec1483" 
00:05:40.179 ], 00:05:40.179 "product_name": "Malloc disk", 00:05:40.179 "block_size": 512, 00:05:40.179 "num_blocks": 16384, 00:05:40.179 "uuid": "790494de-1fb2-4548-a0c6-bcefa3ec1483", 00:05:40.179 "assigned_rate_limits": { 00:05:40.179 "rw_ios_per_sec": 0, 00:05:40.179 "rw_mbytes_per_sec": 0, 00:05:40.179 "r_mbytes_per_sec": 0, 00:05:40.179 "w_mbytes_per_sec": 0 00:05:40.179 }, 00:05:40.179 "claimed": true, 00:05:40.179 "claim_type": "exclusive_write", 00:05:40.179 "zoned": false, 00:05:40.179 "supported_io_types": { 00:05:40.179 "read": true, 00:05:40.179 "write": true, 00:05:40.179 "unmap": true, 00:05:40.179 "flush": true, 00:05:40.179 "reset": true, 00:05:40.179 "nvme_admin": false, 00:05:40.179 "nvme_io": false, 00:05:40.179 "nvme_io_md": false, 00:05:40.179 "write_zeroes": true, 00:05:40.179 "zcopy": true, 00:05:40.179 "get_zone_info": false, 00:05:40.179 "zone_management": false, 00:05:40.179 "zone_append": false, 00:05:40.179 "compare": false, 00:05:40.179 "compare_and_write": false, 00:05:40.179 "abort": true, 00:05:40.179 "seek_hole": false, 00:05:40.179 "seek_data": false, 00:05:40.179 "copy": true, 00:05:40.179 "nvme_iov_md": false 00:05:40.179 }, 00:05:40.179 "memory_domains": [ 00:05:40.179 { 00:05:40.179 "dma_device_id": "system", 00:05:40.179 "dma_device_type": 1 00:05:40.179 }, 00:05:40.179 { 00:05:40.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.179 "dma_device_type": 2 00:05:40.179 } 00:05:40.179 ], 00:05:40.179 "driver_specific": {} 00:05:40.179 }, 00:05:40.179 { 00:05:40.179 "name": "Passthru0", 00:05:40.179 "aliases": [ 00:05:40.179 "af14846e-3219-579a-a5c4-6f8e5f8a2990" 00:05:40.179 ], 00:05:40.179 "product_name": "passthru", 00:05:40.179 "block_size": 512, 00:05:40.179 "num_blocks": 16384, 00:05:40.179 "uuid": "af14846e-3219-579a-a5c4-6f8e5f8a2990", 00:05:40.179 "assigned_rate_limits": { 00:05:40.179 "rw_ios_per_sec": 0, 00:05:40.179 "rw_mbytes_per_sec": 0, 00:05:40.179 "r_mbytes_per_sec": 0, 00:05:40.179 "w_mbytes_per_sec": 0 
00:05:40.179 }, 00:05:40.179 "claimed": false, 00:05:40.179 "zoned": false, 00:05:40.179 "supported_io_types": { 00:05:40.179 "read": true, 00:05:40.179 "write": true, 00:05:40.179 "unmap": true, 00:05:40.179 "flush": true, 00:05:40.179 "reset": true, 00:05:40.179 "nvme_admin": false, 00:05:40.179 "nvme_io": false, 00:05:40.179 "nvme_io_md": false, 00:05:40.179 "write_zeroes": true, 00:05:40.179 "zcopy": true, 00:05:40.179 "get_zone_info": false, 00:05:40.179 "zone_management": false, 00:05:40.179 "zone_append": false, 00:05:40.179 "compare": false, 00:05:40.179 "compare_and_write": false, 00:05:40.179 "abort": true, 00:05:40.179 "seek_hole": false, 00:05:40.179 "seek_data": false, 00:05:40.179 "copy": true, 00:05:40.179 "nvme_iov_md": false 00:05:40.179 }, 00:05:40.179 "memory_domains": [ 00:05:40.179 { 00:05:40.179 "dma_device_id": "system", 00:05:40.179 "dma_device_type": 1 00:05:40.179 }, 00:05:40.179 { 00:05:40.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.179 "dma_device_type": 2 00:05:40.179 } 00:05:40.179 ], 00:05:40.179 "driver_specific": { 00:05:40.179 "passthru": { 00:05:40.179 "name": "Passthru0", 00:05:40.179 "base_bdev_name": "Malloc2" 00:05:40.179 } 00:05:40.179 } 00:05:40.179 } 00:05:40.179 ]' 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.179 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.180 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.180 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.180 ************************************ 00:05:40.180 END TEST rpc_daemon_integrity 00:05:40.180 ************************************ 00:05:40.180 04:21:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.180 00:05:40.180 real 0m0.318s 00:05:40.180 user 0m0.187s 00:05:40.180 sys 0m0.051s 00:05:40.180 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.180 04:21:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.439 04:21:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.439 04:21:40 rpc -- rpc/rpc.sh@84 -- # killprocess 70688 00:05:40.439 04:21:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 70688 ']' 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@958 -- # kill -0 70688 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70688 00:05:40.440 killing process with pid 70688 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70688' 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@973 -- # kill 70688 00:05:40.440 04:21:40 rpc -- common/autotest_common.sh@978 -- # wait 70688 00:05:40.700 00:05:40.700 real 0m2.781s 00:05:40.700 user 0m3.334s 00:05:40.700 sys 0m0.824s 00:05:40.700 04:21:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.700 04:21:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 ************************************ 00:05:40.700 END TEST rpc 00:05:40.700 ************************************ 00:05:40.700 04:21:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.700 04:21:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.700 04:21:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.700 04:21:40 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 ************************************ 00:05:40.700 START TEST skip_rpc 00:05:40.700 ************************************ 00:05:40.700 04:21:40 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.960 * Looking for test storage... 
00:05:40.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.960 04:21:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.960 --rc genhtml_branch_coverage=1 00:05:40.960 --rc genhtml_function_coverage=1 00:05:40.960 --rc genhtml_legend=1 00:05:40.960 --rc geninfo_all_blocks=1 00:05:40.960 --rc geninfo_unexecuted_blocks=1 00:05:40.960 00:05:40.960 ' 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.960 --rc genhtml_branch_coverage=1 00:05:40.960 --rc genhtml_function_coverage=1 00:05:40.960 --rc genhtml_legend=1 00:05:40.960 --rc geninfo_all_blocks=1 00:05:40.960 --rc geninfo_unexecuted_blocks=1 00:05:40.960 00:05:40.960 ' 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:40.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.960 --rc genhtml_branch_coverage=1 00:05:40.960 --rc genhtml_function_coverage=1 00:05:40.960 --rc genhtml_legend=1 00:05:40.960 --rc geninfo_all_blocks=1 00:05:40.960 --rc geninfo_unexecuted_blocks=1 00:05:40.960 00:05:40.960 ' 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.960 --rc genhtml_branch_coverage=1 00:05:40.960 --rc genhtml_function_coverage=1 00:05:40.960 --rc genhtml_legend=1 00:05:40.960 --rc geninfo_all_blocks=1 00:05:40.960 --rc geninfo_unexecuted_blocks=1 00:05:40.960 00:05:40.960 ' 00:05:40.960 04:21:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.960 04:21:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.960 04:21:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.960 04:21:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.960 ************************************ 00:05:40.960 START TEST skip_rpc 00:05:40.960 ************************************ 00:05:40.960 04:21:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:40.960 04:21:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70890 00:05:40.960 04:21:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.960 04:21:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.960 04:21:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:41.220 [2024-12-13 04:21:41.012689] Starting SPDK v25.01-pre 
git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:41.220 [2024-12-13 04:21:41.012852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70890 ] 00:05:41.220 [2024-12-13 04:21:41.169607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.220 [2024-12-13 04:21:41.195564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70890 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70890 ']' 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70890 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70890 00:05:46.536 killing process with pid 70890 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70890' 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70890 00:05:46.536 04:21:45 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70890 00:05:46.809 00:05:46.809 real 0m5.675s 00:05:46.809 user 0m5.261s 00:05:46.809 sys 0m0.336s 00:05:46.809 ************************************ 00:05:46.809 END TEST skip_rpc 00:05:46.809 ************************************ 00:05:46.809 04:21:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.809 04:21:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.809 04:21:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:46.809 04:21:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.809 04:21:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.809 04:21:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.809 
************************************ 00:05:46.809 START TEST skip_rpc_with_json 00:05:46.809 ************************************ 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:46.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70983 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70983 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 70983 ']' 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.809 04:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.809 [2024-12-13 04:21:46.753667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:46.809 [2024-12-13 04:21:46.753880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70983 ] 00:05:47.069 [2024-12-13 04:21:46.908464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.069 [2024-12-13 04:21:46.948917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.639 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.640 [2024-12-13 04:21:47.578123] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:47.640 request: 00:05:47.640 { 00:05:47.640 "trtype": "tcp", 00:05:47.640 "method": "nvmf_get_transports", 00:05:47.640 "req_id": 1 00:05:47.640 } 00:05:47.640 Got JSON-RPC error response 00:05:47.640 response: 00:05:47.640 { 00:05:47.640 "code": -19, 00:05:47.640 "message": "No such device" 00:05:47.640 } 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.640 [2024-12-13 04:21:47.590260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.640 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.900 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.900 04:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.900 { 00:05:47.900 "subsystems": [ 00:05:47.900 { 00:05:47.900 "subsystem": "fsdev", 00:05:47.900 "config": [ 00:05:47.900 { 00:05:47.900 "method": "fsdev_set_opts", 00:05:47.900 "params": { 00:05:47.900 "fsdev_io_pool_size": 65535, 00:05:47.900 "fsdev_io_cache_size": 256 00:05:47.900 } 00:05:47.900 } 00:05:47.900 ] 00:05:47.900 }, 00:05:47.900 { 00:05:47.900 "subsystem": "keyring", 00:05:47.900 "config": [] 00:05:47.900 }, 00:05:47.900 { 00:05:47.900 "subsystem": "iobuf", 00:05:47.900 "config": [ 00:05:47.900 { 00:05:47.900 "method": "iobuf_set_options", 00:05:47.900 "params": { 00:05:47.900 "small_pool_count": 8192, 00:05:47.900 "large_pool_count": 1024, 00:05:47.900 "small_bufsize": 8192, 00:05:47.900 "large_bufsize": 135168, 00:05:47.900 "enable_numa": false 00:05:47.900 } 00:05:47.900 } 00:05:47.900 ] 00:05:47.900 }, 00:05:47.900 { 00:05:47.900 "subsystem": "sock", 00:05:47.900 "config": [ 00:05:47.900 { 00:05:47.900 "method": "sock_set_default_impl", 00:05:47.900 "params": { 00:05:47.900 "impl_name": "posix" 00:05:47.900 } 00:05:47.900 }, 00:05:47.900 { 00:05:47.900 "method": "sock_impl_set_options", 00:05:47.900 "params": { 00:05:47.900 "impl_name": "ssl", 00:05:47.900 "recv_buf_size": 4096, 00:05:47.900 "send_buf_size": 4096, 00:05:47.900 "enable_recv_pipe": true, 00:05:47.900 "enable_quickack": false, 00:05:47.900 
"enable_placement_id": 0, 00:05:47.900 "enable_zerocopy_send_server": true, 00:05:47.900 "enable_zerocopy_send_client": false, 00:05:47.900 "zerocopy_threshold": 0, 00:05:47.900 "tls_version": 0, 00:05:47.900 "enable_ktls": false 00:05:47.900 } 00:05:47.900 }, 00:05:47.900 { 00:05:47.900 "method": "sock_impl_set_options", 00:05:47.900 "params": { 00:05:47.900 "impl_name": "posix", 00:05:47.900 "recv_buf_size": 2097152, 00:05:47.900 "send_buf_size": 2097152, 00:05:47.900 "enable_recv_pipe": true, 00:05:47.900 "enable_quickack": false, 00:05:47.900 "enable_placement_id": 0, 00:05:47.900 "enable_zerocopy_send_server": true, 00:05:47.900 "enable_zerocopy_send_client": false, 00:05:47.900 "zerocopy_threshold": 0, 00:05:47.900 "tls_version": 0, 00:05:47.900 "enable_ktls": false 00:05:47.900 } 00:05:47.900 } 00:05:47.901 ] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "vmd", 00:05:47.901 "config": [] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "accel", 00:05:47.901 "config": [ 00:05:47.901 { 00:05:47.901 "method": "accel_set_options", 00:05:47.901 "params": { 00:05:47.901 "small_cache_size": 128, 00:05:47.901 "large_cache_size": 16, 00:05:47.901 "task_count": 2048, 00:05:47.901 "sequence_count": 2048, 00:05:47.901 "buf_count": 2048 00:05:47.901 } 00:05:47.901 } 00:05:47.901 ] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "bdev", 00:05:47.901 "config": [ 00:05:47.901 { 00:05:47.901 "method": "bdev_set_options", 00:05:47.901 "params": { 00:05:47.901 "bdev_io_pool_size": 65535, 00:05:47.901 "bdev_io_cache_size": 256, 00:05:47.901 "bdev_auto_examine": true, 00:05:47.901 "iobuf_small_cache_size": 128, 00:05:47.901 "iobuf_large_cache_size": 16 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "bdev_raid_set_options", 00:05:47.901 "params": { 00:05:47.901 "process_window_size_kb": 1024, 00:05:47.901 "process_max_bandwidth_mb_sec": 0 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "bdev_iscsi_set_options", 
00:05:47.901 "params": { 00:05:47.901 "timeout_sec": 30 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "bdev_nvme_set_options", 00:05:47.901 "params": { 00:05:47.901 "action_on_timeout": "none", 00:05:47.901 "timeout_us": 0, 00:05:47.901 "timeout_admin_us": 0, 00:05:47.901 "keep_alive_timeout_ms": 10000, 00:05:47.901 "arbitration_burst": 0, 00:05:47.901 "low_priority_weight": 0, 00:05:47.901 "medium_priority_weight": 0, 00:05:47.901 "high_priority_weight": 0, 00:05:47.901 "nvme_adminq_poll_period_us": 10000, 00:05:47.901 "nvme_ioq_poll_period_us": 0, 00:05:47.901 "io_queue_requests": 0, 00:05:47.901 "delay_cmd_submit": true, 00:05:47.901 "transport_retry_count": 4, 00:05:47.901 "bdev_retry_count": 3, 00:05:47.901 "transport_ack_timeout": 0, 00:05:47.901 "ctrlr_loss_timeout_sec": 0, 00:05:47.901 "reconnect_delay_sec": 0, 00:05:47.901 "fast_io_fail_timeout_sec": 0, 00:05:47.901 "disable_auto_failback": false, 00:05:47.901 "generate_uuids": false, 00:05:47.901 "transport_tos": 0, 00:05:47.901 "nvme_error_stat": false, 00:05:47.901 "rdma_srq_size": 0, 00:05:47.901 "io_path_stat": false, 00:05:47.901 "allow_accel_sequence": false, 00:05:47.901 "rdma_max_cq_size": 0, 00:05:47.901 "rdma_cm_event_timeout_ms": 0, 00:05:47.901 "dhchap_digests": [ 00:05:47.901 "sha256", 00:05:47.901 "sha384", 00:05:47.901 "sha512" 00:05:47.901 ], 00:05:47.901 "dhchap_dhgroups": [ 00:05:47.901 "null", 00:05:47.901 "ffdhe2048", 00:05:47.901 "ffdhe3072", 00:05:47.901 "ffdhe4096", 00:05:47.901 "ffdhe6144", 00:05:47.901 "ffdhe8192" 00:05:47.901 ], 00:05:47.901 "rdma_umr_per_io": false 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "bdev_nvme_set_hotplug", 00:05:47.901 "params": { 00:05:47.901 "period_us": 100000, 00:05:47.901 "enable": false 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "bdev_wait_for_examine" 00:05:47.901 } 00:05:47.901 ] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "scsi", 00:05:47.901 "config": null 
00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "scheduler", 00:05:47.901 "config": [ 00:05:47.901 { 00:05:47.901 "method": "framework_set_scheduler", 00:05:47.901 "params": { 00:05:47.901 "name": "static" 00:05:47.901 } 00:05:47.901 } 00:05:47.901 ] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "vhost_scsi", 00:05:47.901 "config": [] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "vhost_blk", 00:05:47.901 "config": [] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "ublk", 00:05:47.901 "config": [] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "nbd", 00:05:47.901 "config": [] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "nvmf", 00:05:47.901 "config": [ 00:05:47.901 { 00:05:47.901 "method": "nvmf_set_config", 00:05:47.901 "params": { 00:05:47.901 "discovery_filter": "match_any", 00:05:47.901 "admin_cmd_passthru": { 00:05:47.901 "identify_ctrlr": false 00:05:47.901 }, 00:05:47.901 "dhchap_digests": [ 00:05:47.901 "sha256", 00:05:47.901 "sha384", 00:05:47.901 "sha512" 00:05:47.901 ], 00:05:47.901 "dhchap_dhgroups": [ 00:05:47.901 "null", 00:05:47.901 "ffdhe2048", 00:05:47.901 "ffdhe3072", 00:05:47.901 "ffdhe4096", 00:05:47.901 "ffdhe6144", 00:05:47.901 "ffdhe8192" 00:05:47.901 ] 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "nvmf_set_max_subsystems", 00:05:47.901 "params": { 00:05:47.901 "max_subsystems": 1024 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "nvmf_set_crdt", 00:05:47.901 "params": { 00:05:47.901 "crdt1": 0, 00:05:47.901 "crdt2": 0, 00:05:47.901 "crdt3": 0 00:05:47.901 } 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "method": "nvmf_create_transport", 00:05:47.901 "params": { 00:05:47.901 "trtype": "TCP", 00:05:47.901 "max_queue_depth": 128, 00:05:47.901 "max_io_qpairs_per_ctrlr": 127, 00:05:47.901 "in_capsule_data_size": 4096, 00:05:47.901 "max_io_size": 131072, 00:05:47.901 "io_unit_size": 131072, 00:05:47.901 "max_aq_depth": 128, 00:05:47.901 
"num_shared_buffers": 511, 00:05:47.901 "buf_cache_size": 4294967295, 00:05:47.901 "dif_insert_or_strip": false, 00:05:47.901 "zcopy": false, 00:05:47.901 "c2h_success": true, 00:05:47.901 "sock_priority": 0, 00:05:47.901 "abort_timeout_sec": 1, 00:05:47.901 "ack_timeout": 0, 00:05:47.901 "data_wr_pool_size": 0 00:05:47.901 } 00:05:47.901 } 00:05:47.901 ] 00:05:47.901 }, 00:05:47.901 { 00:05:47.901 "subsystem": "iscsi", 00:05:47.901 "config": [ 00:05:47.901 { 00:05:47.901 "method": "iscsi_set_options", 00:05:47.901 "params": { 00:05:47.901 "node_base": "iqn.2016-06.io.spdk", 00:05:47.901 "max_sessions": 128, 00:05:47.901 "max_connections_per_session": 2, 00:05:47.901 "max_queue_depth": 64, 00:05:47.901 "default_time2wait": 2, 00:05:47.901 "default_time2retain": 20, 00:05:47.901 "first_burst_length": 8192, 00:05:47.901 "immediate_data": true, 00:05:47.901 "allow_duplicated_isid": false, 00:05:47.901 "error_recovery_level": 0, 00:05:47.901 "nop_timeout": 60, 00:05:47.901 "nop_in_interval": 30, 00:05:47.901 "disable_chap": false, 00:05:47.901 "require_chap": false, 00:05:47.901 "mutual_chap": false, 00:05:47.901 "chap_group": 0, 00:05:47.901 "max_large_datain_per_connection": 64, 00:05:47.901 "max_r2t_per_connection": 4, 00:05:47.901 "pdu_pool_size": 36864, 00:05:47.901 "immediate_data_pool_size": 16384, 00:05:47.901 "data_out_pool_size": 2048 00:05:47.901 } 00:05:47.901 } 00:05:47.901 ] 00:05:47.901 } 00:05:47.901 ] 00:05:47.901 } 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70983 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70983 ']' 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70983 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:47.901 04:21:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70983 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70983' 00:05:47.901 killing process with pid 70983 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70983 00:05:47.901 04:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70983 00:05:48.471 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71011 00:05:48.471 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.471 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71011 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71011 ']' 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71011 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71011 00:05:53.754 killing process with pid 71011 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.754 04:21:53 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71011' 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71011 00:05:53.754 04:21:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71011 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.323 ************************************ 00:05:54.323 END TEST skip_rpc_with_json 00:05:54.323 ************************************ 00:05:54.323 00:05:54.323 real 0m7.431s 00:05:54.323 user 0m6.685s 00:05:54.323 sys 0m1.017s 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.323 04:21:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.323 04:21:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.323 04:21:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.323 04:21:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.323 ************************************ 00:05:54.323 START TEST skip_rpc_with_delay 00:05:54.323 ************************************ 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # 
local es=0 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.323 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.323 [2024-12-13 04:21:54.282767] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:54.583 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:54.583 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.583 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.583 ************************************ 00:05:54.583 END TEST skip_rpc_with_delay 00:05:54.583 ************************************ 00:05:54.583 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.583 00:05:54.583 real 0m0.193s 00:05:54.583 user 0m0.086s 00:05:54.583 sys 0m0.104s 00:05:54.583 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.583 04:21:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.583 04:21:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.583 04:21:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:54.583 04:21:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:54.583 04:21:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.583 04:21:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.583 04:21:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.583 ************************************ 00:05:54.583 START TEST exit_on_failed_rpc_init 00:05:54.583 ************************************ 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71123 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71123 00:05:54.583 04:21:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71123 ']' 00:05:54.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.583 04:21:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.583 [2024-12-13 04:21:54.570675] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:54.583 [2024-12-13 04:21:54.570836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71123 ] 00:05:54.843 [2024-12-13 04:21:54.726994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.843 [2024-12-13 04:21:54.766072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.410 04:21:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.410 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.669 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.669 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.669 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.669 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.669 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:55.669 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.669 [2024-12-13 04:21:55.527901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:55.669 [2024-12-13 04:21:55.528112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71141 ] 00:05:55.669 [2024-12-13 04:21:55.674120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.928 [2024-12-13 04:21:55.708075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.929 [2024-12-13 04:21:55.708279] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:55.929 [2024-12-13 04:21:55.708377] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.929 [2024-12-13 04:21:55.708422] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71123 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71123 ']' 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71123 00:05:55.929 04:21:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71123 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71123' 00:05:55.929 killing process with pid 71123 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71123 00:05:55.929 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71123 00:05:56.498 ************************************ 00:05:56.498 END TEST exit_on_failed_rpc_init 00:05:56.498 ************************************ 00:05:56.498 00:05:56.498 real 0m2.014s 00:05:56.498 user 0m2.010s 00:05:56.498 sys 0m0.661s 00:05:56.498 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.498 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.498 04:21:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.757 ************************************ 00:05:56.757 END TEST skip_rpc 00:05:56.757 ************************************ 00:05:56.757 00:05:56.757 real 0m15.833s 00:05:56.757 user 0m14.246s 00:05:56.757 sys 0m2.446s 00:05:56.757 04:21:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.757 04:21:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.757 04:21:56 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.757 04:21:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.757 04:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.757 04:21:56 -- common/autotest_common.sh@10 -- # set +x 00:05:56.757 ************************************ 00:05:56.757 START TEST rpc_client 00:05:56.757 ************************************ 00:05:56.757 04:21:56 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.757 * Looking for test storage... 00:05:56.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:56.757 04:21:56 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.757 04:21:56 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.757 04:21:56 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.017 04:21:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.017 --rc genhtml_branch_coverage=1 00:05:57.017 --rc genhtml_function_coverage=1 00:05:57.017 --rc genhtml_legend=1 00:05:57.017 --rc geninfo_all_blocks=1 00:05:57.017 --rc geninfo_unexecuted_blocks=1 00:05:57.017 00:05:57.017 ' 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.017 --rc genhtml_branch_coverage=1 00:05:57.017 --rc genhtml_function_coverage=1 00:05:57.017 --rc 
genhtml_legend=1 00:05:57.017 --rc geninfo_all_blocks=1 00:05:57.017 --rc geninfo_unexecuted_blocks=1 00:05:57.017 00:05:57.017 ' 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.017 --rc genhtml_branch_coverage=1 00:05:57.017 --rc genhtml_function_coverage=1 00:05:57.017 --rc genhtml_legend=1 00:05:57.017 --rc geninfo_all_blocks=1 00:05:57.017 --rc geninfo_unexecuted_blocks=1 00:05:57.017 00:05:57.017 ' 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.017 --rc genhtml_branch_coverage=1 00:05:57.017 --rc genhtml_function_coverage=1 00:05:57.017 --rc genhtml_legend=1 00:05:57.017 --rc geninfo_all_blocks=1 00:05:57.017 --rc geninfo_unexecuted_blocks=1 00:05:57.017 00:05:57.017 ' 00:05:57.017 04:21:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:57.017 OK 00:05:57.017 04:21:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.017 ************************************ 00:05:57.017 END TEST rpc_client 00:05:57.017 ************************************ 00:05:57.017 00:05:57.017 real 0m0.308s 00:05:57.017 user 0m0.151s 00:05:57.017 sys 0m0.172s 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.017 04:21:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.017 04:21:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:57.017 04:21:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.017 04:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.017 04:21:56 -- common/autotest_common.sh@10 -- # set +x 00:05:57.017 ************************************ 00:05:57.017 START TEST json_config 
00:05:57.017 ************************************ 00:05:57.017 04:21:56 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:57.277 04:21:57 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.277 04:21:57 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.277 04:21:57 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.277 04:21:57 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.277 04:21:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.277 04:21:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.277 04:21:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.277 04:21:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.277 04:21:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.277 04:21:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:57.277 04:21:57 json_config -- scripts/common.sh@345 -- # : 1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.277 04:21:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.277 04:21:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@353 -- # local d=1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.277 04:21:57 json_config -- scripts/common.sh@355 -- # echo 1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.277 04:21:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@353 -- # local d=2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.277 04:21:57 json_config -- scripts/common.sh@355 -- # echo 2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.277 04:21:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.277 04:21:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.277 04:21:57 json_config -- scripts/common.sh@368 -- # return 0 00:05:57.277 04:21:57 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.277 04:21:57 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.277 --rc genhtml_branch_coverage=1 00:05:57.277 --rc genhtml_function_coverage=1 00:05:57.277 --rc genhtml_legend=1 00:05:57.277 --rc geninfo_all_blocks=1 00:05:57.277 --rc geninfo_unexecuted_blocks=1 00:05:57.277 00:05:57.277 ' 00:05:57.278 04:21:57 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.278 --rc genhtml_branch_coverage=1 00:05:57.278 --rc genhtml_function_coverage=1 00:05:57.278 --rc genhtml_legend=1 00:05:57.278 --rc geninfo_all_blocks=1 00:05:57.278 --rc geninfo_unexecuted_blocks=1 00:05:57.278 00:05:57.278 ' 00:05:57.278 04:21:57 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.278 --rc genhtml_branch_coverage=1 00:05:57.278 --rc genhtml_function_coverage=1 00:05:57.278 --rc genhtml_legend=1 00:05:57.278 --rc geninfo_all_blocks=1 00:05:57.278 --rc geninfo_unexecuted_blocks=1 00:05:57.278 00:05:57.278 ' 00:05:57.278 04:21:57 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.278 --rc genhtml_branch_coverage=1 00:05:57.278 --rc genhtml_function_coverage=1 00:05:57.278 --rc genhtml_legend=1 00:05:57.278 --rc geninfo_all_blocks=1 00:05:57.278 --rc geninfo_unexecuted_blocks=1 00:05:57.278 00:05:57.278 ' 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab0d7d39-54b9-46b6-a8ab-fec082cf4a1e 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=ab0d7d39-54b9-46b6-a8ab-fec082cf4a1e 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.278 04:21:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.278 04:21:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.278 04:21:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.278 04:21:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.278 04:21:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.278 04:21:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.278 04:21:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.278 04:21:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:57.278 04:21:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@51 -- # : 0 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.278 04:21:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:57.278 WARNING: No tests are enabled so not running JSON configuration tests 00:05:57.278 04:21:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:57.278 00:05:57.278 real 0m0.231s 00:05:57.278 user 0m0.149s 00:05:57.278 sys 0m0.085s 00:05:57.278 04:21:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.278 04:21:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.278 ************************************ 00:05:57.278 END TEST json_config 00:05:57.278 ************************************ 00:05:57.278 04:21:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.278 04:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.278 04:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.278 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:57.278 ************************************ 00:05:57.278 START TEST json_config_extra_key 00:05:57.278 ************************************ 00:05:57.278 04:21:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.538 04:21:57 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.538 04:21:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.538 --rc genhtml_branch_coverage=1 00:05:57.538 --rc genhtml_function_coverage=1 00:05:57.538 --rc genhtml_legend=1 00:05:57.538 --rc geninfo_all_blocks=1 00:05:57.538 --rc geninfo_unexecuted_blocks=1 00:05:57.538 00:05:57.538 ' 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.538 --rc genhtml_branch_coverage=1 00:05:57.538 --rc genhtml_function_coverage=1 00:05:57.538 --rc 
genhtml_legend=1 00:05:57.538 --rc geninfo_all_blocks=1 00:05:57.538 --rc geninfo_unexecuted_blocks=1 00:05:57.538 00:05:57.538 ' 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.538 --rc genhtml_branch_coverage=1 00:05:57.538 --rc genhtml_function_coverage=1 00:05:57.538 --rc genhtml_legend=1 00:05:57.538 --rc geninfo_all_blocks=1 00:05:57.538 --rc geninfo_unexecuted_blocks=1 00:05:57.538 00:05:57.538 ' 00:05:57.538 04:21:57 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.538 --rc genhtml_branch_coverage=1 00:05:57.538 --rc genhtml_function_coverage=1 00:05:57.538 --rc genhtml_legend=1 00:05:57.538 --rc geninfo_all_blocks=1 00:05:57.538 --rc geninfo_unexecuted_blocks=1 00:05:57.538 00:05:57.538 ' 00:05:57.538 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.538 04:21:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab0d7d39-54b9-46b6-a8ab-fec082cf4a1e 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ab0d7d39-54b9-46b6-a8ab-fec082cf4a1e 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.539 04:21:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.539 04:21:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.539 04:21:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.539 04:21:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.539 04:21:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.539 04:21:57 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.539 04:21:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.539 04:21:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.539 04:21:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.539 04:21:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.539 INFO: launching applications... 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
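Editor's note: the `[: : integer expression expected` message captured above comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33 — `-eq` needs integer operands and an empty string is not one. A minimal standalone reproduction and a guarded alternative (the variable name here is illustrative, not SPDK's):

```shell
# Reproduce the "[: : integer expression expected" failure mode seen in
# the trace, then show the guarded form. `-eq` requires integers; an
# empty expansion makes `[` error out and the test evaluate false.
flag=""

if [ "$flag" -eq 1 ] 2>/dev/null; then   # errors internally; branch not taken
  echo "enabled"
else
  echo "not enabled (empty operand)"
fi

# Guarded form: substitute 0 when the variable is empty or unset.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```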
00:05:57.539 04:21:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71329 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.539 Waiting for target to run... 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71329 /var/tmp/spdk_tgt.sock 00:05:57.539 04:21:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.539 04:21:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71329 ']' 00:05:57.539 04:21:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.539 04:21:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.539 04:21:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:57.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.539 04:21:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.539 04:21:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.799 [2024-12-13 04:21:57.585346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:57.799 [2024-12-13 04:21:57.585585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71329 ] 00:05:58.059 [2024-12-13 04:21:57.955682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.059 [2024-12-13 04:21:57.979865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.628 04:21:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.628 04:21:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:58.628 00:05:58.628 INFO: shutting down applications... 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.628 04:21:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
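Editor's note: the "Waiting for process to start up and listen on UNIX domain socket..." step above is a bounded-retry wait (the trace shows `max_retries=100`). The shape of that wait can be sketched as follows; the helper name, retry budget, and poll interval are illustrative, not SPDK's actual code:

```shell
# Bounded-retry wait for a UNIX domain socket to appear, in the spirit of
# the "Waiting for process..." step in the trace. Name, retry budget, and
# interval are illustrative.
wait_for_socket() {
  local path=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$path" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}
```

The SPDK helper likely goes further and issues an RPC over the socket to confirm the server actually answers; checking only that the socket file exists can race with server initialization.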
00:05:58.628 04:21:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71329 ]] 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71329 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71329 00:05:58.628 04:21:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.197 04:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.197 04:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.197 04:21:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71329 00:05:59.197 04:21:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71329 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.458 04:21:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.458 SPDK target shutdown done 00:05:59.458 04:21:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 
00:05:59.458 Success 00:05:59.458 00:05:59.458 real 0m2.175s 00:05:59.458 user 0m1.634s 00:05:59.458 sys 0m0.499s 00:05:59.458 04:21:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.458 04:21:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.458 ************************************ 00:05:59.458 END TEST json_config_extra_key 00:05:59.458 ************************************ 00:05:59.718 04:21:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.718 04:21:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.718 04:21:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.718 04:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:59.718 ************************************ 00:05:59.718 START TEST alias_rpc 00:05:59.718 ************************************ 00:05:59.718 04:21:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.718 * Looking for test storage... 
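Editor's note: the shutdown sequence traced above sends SIGINT to the target, then polls up to 30 times with `kill -0` at 0.5 s intervals before declaring "SPDK target shutdown done". A self-contained sketch of that poll-until-exit pattern, using SIGTERM and a background `sleep` as a stand-in for the real target (both substitutions are illustrative):

```shell
# Poll-until-exit shutdown, mirroring the kill -0 / sleep 0.5 loop in the
# trace. `kill -0` sends no signal; it only checks that the PID exists.
sleep 30 &                       # stand-in for the SPDK target process
pid=$!

kill -TERM "$pid" 2>/dev/null    # the trace uses SIGINT on the real target

for (( i = 0; i < 30; i++ )); do
  kill -0 "$pid" 2>/dev/null || break
  sleep 0.5
done

wait "$pid" 2>/dev/null || true  # reap the child so the PID is fully gone
if kill -0 "$pid" 2>/dev/null; then
  echo "target still running"
else
  echo "target shutdown done"
fi
```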
00:05:59.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:59.718 04:21:59 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.718 04:21:59 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.719 04:21:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.719 --rc genhtml_branch_coverage=1 00:05:59.719 --rc genhtml_function_coverage=1 00:05:59.719 --rc genhtml_legend=1 00:05:59.719 --rc geninfo_all_blocks=1 00:05:59.719 --rc geninfo_unexecuted_blocks=1 00:05:59.719 00:05:59.719 ' 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.719 --rc genhtml_branch_coverage=1 00:05:59.719 --rc genhtml_function_coverage=1 00:05:59.719 --rc genhtml_legend=1 00:05:59.719 --rc geninfo_all_blocks=1 00:05:59.719 --rc geninfo_unexecuted_blocks=1 00:05:59.719 00:05:59.719 ' 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:59.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.719 --rc genhtml_branch_coverage=1 00:05:59.719 --rc genhtml_function_coverage=1 00:05:59.719 --rc genhtml_legend=1 00:05:59.719 --rc geninfo_all_blocks=1 00:05:59.719 --rc geninfo_unexecuted_blocks=1 00:05:59.719 00:05:59.719 ' 00:05:59.719 04:21:59 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.719 --rc genhtml_branch_coverage=1 00:05:59.719 --rc genhtml_function_coverage=1 00:05:59.719 --rc genhtml_legend=1 00:05:59.719 --rc geninfo_all_blocks=1 00:05:59.719 --rc geninfo_unexecuted_blocks=1 00:05:59.719 00:05:59.719 ' 00:05:59.719 04:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.979 04:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71409 00:05:59.979 04:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.979 04:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71409 00:05:59.979 04:21:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71409 ']' 00:05:59.979 04:21:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.979 04:21:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.979 04:21:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.979 04:21:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.979 04:21:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.979 [2024-12-13 04:21:59.827110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
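Editor's note: the `lt 1.15 2` / `cmp_versions` steps traced above split both version strings on `.`, `-`, and `:` (`IFS=.-:` with `read -ra`) and compare them field by field as integers. A compact sketch of that comparison; the function name is illustrative, not SPDK's:

```shell
# Component-wise numeric "less than" over dotted version strings, in the
# spirit of the cmp_versions trace: split on ".", "-", ":" and compare
# field by field, treating missing fields as 0.
version_lt() {
  local IFS='.-:'
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1   # equal, so not strictly less-than
}
```

Comparing numerically per field is what makes `1.2.9 < 1.10` come out true, where a plain string comparison would get it wrong.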
00:05:59.979 [2024-12-13 04:21:59.827239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71409 ] 00:05:59.979 [2024-12-13 04:21:59.978282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.238 [2024-12-13 04:22:00.016734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.807 04:22:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.807 04:22:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.807 04:22:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:01.067 04:22:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71409 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71409 ']' 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71409 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71409 00:06:01.067 killing process with pid 71409 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71409' 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 71409 00:06:01.067 04:22:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 71409 00:06:01.635 ************************************ 00:06:01.635 END TEST alias_rpc 00:06:01.635 ************************************ 00:06:01.635 00:06:01.635 real 
0m1.989s 00:06:01.635 user 0m1.872s 00:06:01.635 sys 0m0.636s 00:06:01.635 04:22:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.635 04:22:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.635 04:22:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:01.635 04:22:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:01.635 04:22:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.635 04:22:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.635 04:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.635 ************************************ 00:06:01.635 START TEST spdkcli_tcp 00:06:01.635 ************************************ 00:06:01.635 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:01.895 * Looking for test storage... 00:06:01.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:01.895 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.895 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.895 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.895 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.895 
04:22:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:01.895 04:22:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.896 04:22:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.896 --rc genhtml_branch_coverage=1 00:06:01.896 --rc genhtml_function_coverage=1 00:06:01.896 --rc genhtml_legend=1 
00:06:01.896 --rc geninfo_all_blocks=1 00:06:01.896 --rc geninfo_unexecuted_blocks=1 00:06:01.896 00:06:01.896 ' 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.896 --rc genhtml_branch_coverage=1 00:06:01.896 --rc genhtml_function_coverage=1 00:06:01.896 --rc genhtml_legend=1 00:06:01.896 --rc geninfo_all_blocks=1 00:06:01.896 --rc geninfo_unexecuted_blocks=1 00:06:01.896 00:06:01.896 ' 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.896 --rc genhtml_branch_coverage=1 00:06:01.896 --rc genhtml_function_coverage=1 00:06:01.896 --rc genhtml_legend=1 00:06:01.896 --rc geninfo_all_blocks=1 00:06:01.896 --rc geninfo_unexecuted_blocks=1 00:06:01.896 00:06:01.896 ' 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.896 --rc genhtml_branch_coverage=1 00:06:01.896 --rc genhtml_function_coverage=1 00:06:01.896 --rc genhtml_legend=1 00:06:01.896 --rc geninfo_all_blocks=1 00:06:01.896 --rc geninfo_unexecuted_blocks=1 00:06:01.896 00:06:01.896 ' 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:01.896 04:22:01 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71494 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:01.896 04:22:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71494 00:06:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71494 ']' 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.896 04:22:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.896 [2024-12-13 04:22:01.879322] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:01.896 [2024-12-13 04:22:01.879460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71494 ] 00:06:02.155 [2024-12-13 04:22:02.035484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.155 [2024-12-13 04:22:02.077225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.155 [2024-12-13 04:22:02.077249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.727 04:22:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.727 04:22:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:02.727 04:22:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71511 00:06:02.727 04:22:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:02.727 04:22:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.990 [ 00:06:02.990 "bdev_malloc_delete", 00:06:02.990 "bdev_malloc_create", 00:06:02.990 "bdev_null_resize", 00:06:02.990 "bdev_null_delete", 00:06:02.990 "bdev_null_create", 00:06:02.990 "bdev_nvme_cuse_unregister", 00:06:02.990 "bdev_nvme_cuse_register", 00:06:02.990 "bdev_opal_new_user", 00:06:02.990 "bdev_opal_set_lock_state", 00:06:02.990 "bdev_opal_delete", 00:06:02.990 "bdev_opal_get_info", 00:06:02.990 "bdev_opal_create", 00:06:02.990 "bdev_nvme_opal_revert", 00:06:02.990 "bdev_nvme_opal_init", 00:06:02.990 "bdev_nvme_send_cmd", 00:06:02.990 "bdev_nvme_set_keys", 00:06:02.990 "bdev_nvme_get_path_iostat", 00:06:02.990 "bdev_nvme_get_mdns_discovery_info", 00:06:02.990 "bdev_nvme_stop_mdns_discovery", 00:06:02.990 "bdev_nvme_start_mdns_discovery", 00:06:02.990 "bdev_nvme_set_multipath_policy", 00:06:02.990 
"bdev_nvme_set_preferred_path", 00:06:02.990 "bdev_nvme_get_io_paths", 00:06:02.990 "bdev_nvme_remove_error_injection", 00:06:02.990 "bdev_nvme_add_error_injection", 00:06:02.990 "bdev_nvme_get_discovery_info", 00:06:02.990 "bdev_nvme_stop_discovery", 00:06:02.990 "bdev_nvme_start_discovery", 00:06:02.990 "bdev_nvme_get_controller_health_info", 00:06:02.990 "bdev_nvme_disable_controller", 00:06:02.990 "bdev_nvme_enable_controller", 00:06:02.990 "bdev_nvme_reset_controller", 00:06:02.990 "bdev_nvme_get_transport_statistics", 00:06:02.990 "bdev_nvme_apply_firmware", 00:06:02.990 "bdev_nvme_detach_controller", 00:06:02.990 "bdev_nvme_get_controllers", 00:06:02.990 "bdev_nvme_attach_controller", 00:06:02.990 "bdev_nvme_set_hotplug", 00:06:02.990 "bdev_nvme_set_options", 00:06:02.990 "bdev_passthru_delete", 00:06:02.991 "bdev_passthru_create", 00:06:02.991 "bdev_lvol_set_parent_bdev", 00:06:02.991 "bdev_lvol_set_parent", 00:06:02.991 "bdev_lvol_check_shallow_copy", 00:06:02.991 "bdev_lvol_start_shallow_copy", 00:06:02.991 "bdev_lvol_grow_lvstore", 00:06:02.991 "bdev_lvol_get_lvols", 00:06:02.991 "bdev_lvol_get_lvstores", 00:06:02.991 "bdev_lvol_delete", 00:06:02.991 "bdev_lvol_set_read_only", 00:06:02.991 "bdev_lvol_resize", 00:06:02.991 "bdev_lvol_decouple_parent", 00:06:02.991 "bdev_lvol_inflate", 00:06:02.991 "bdev_lvol_rename", 00:06:02.991 "bdev_lvol_clone_bdev", 00:06:02.991 "bdev_lvol_clone", 00:06:02.991 "bdev_lvol_snapshot", 00:06:02.991 "bdev_lvol_create", 00:06:02.991 "bdev_lvol_delete_lvstore", 00:06:02.991 "bdev_lvol_rename_lvstore", 00:06:02.991 "bdev_lvol_create_lvstore", 00:06:02.991 "bdev_raid_set_options", 00:06:02.991 "bdev_raid_remove_base_bdev", 00:06:02.991 "bdev_raid_add_base_bdev", 00:06:02.991 "bdev_raid_delete", 00:06:02.991 "bdev_raid_create", 00:06:02.991 "bdev_raid_get_bdevs", 00:06:02.991 "bdev_error_inject_error", 00:06:02.991 "bdev_error_delete", 00:06:02.991 "bdev_error_create", 00:06:02.991 "bdev_split_delete", 00:06:02.991 
"bdev_split_create", 00:06:02.991 "bdev_delay_delete", 00:06:02.991 "bdev_delay_create", 00:06:02.991 "bdev_delay_update_latency", 00:06:02.991 "bdev_zone_block_delete", 00:06:02.991 "bdev_zone_block_create", 00:06:02.991 "blobfs_create", 00:06:02.991 "blobfs_detect", 00:06:02.991 "blobfs_set_cache_size", 00:06:02.991 "bdev_aio_delete", 00:06:02.991 "bdev_aio_rescan", 00:06:02.991 "bdev_aio_create", 00:06:02.991 "bdev_ftl_set_property", 00:06:02.991 "bdev_ftl_get_properties", 00:06:02.991 "bdev_ftl_get_stats", 00:06:02.991 "bdev_ftl_unmap", 00:06:02.991 "bdev_ftl_unload", 00:06:02.991 "bdev_ftl_delete", 00:06:02.991 "bdev_ftl_load", 00:06:02.991 "bdev_ftl_create", 00:06:02.991 "bdev_virtio_attach_controller", 00:06:02.991 "bdev_virtio_scsi_get_devices", 00:06:02.991 "bdev_virtio_detach_controller", 00:06:02.991 "bdev_virtio_blk_set_hotplug", 00:06:02.991 "bdev_iscsi_delete", 00:06:02.991 "bdev_iscsi_create", 00:06:02.991 "bdev_iscsi_set_options", 00:06:02.991 "accel_error_inject_error", 00:06:02.991 "ioat_scan_accel_module", 00:06:02.991 "dsa_scan_accel_module", 00:06:02.991 "iaa_scan_accel_module", 00:06:02.991 "keyring_file_remove_key", 00:06:02.991 "keyring_file_add_key", 00:06:02.991 "keyring_linux_set_options", 00:06:02.991 "fsdev_aio_delete", 00:06:02.991 "fsdev_aio_create", 00:06:02.991 "iscsi_get_histogram", 00:06:02.991 "iscsi_enable_histogram", 00:06:02.991 "iscsi_set_options", 00:06:02.991 "iscsi_get_auth_groups", 00:06:02.991 "iscsi_auth_group_remove_secret", 00:06:02.991 "iscsi_auth_group_add_secret", 00:06:02.991 "iscsi_delete_auth_group", 00:06:02.991 "iscsi_create_auth_group", 00:06:02.991 "iscsi_set_discovery_auth", 00:06:02.991 "iscsi_get_options", 00:06:02.991 "iscsi_target_node_request_logout", 00:06:02.991 "iscsi_target_node_set_redirect", 00:06:02.991 "iscsi_target_node_set_auth", 00:06:02.991 "iscsi_target_node_add_lun", 00:06:02.991 "iscsi_get_stats", 00:06:02.991 "iscsi_get_connections", 00:06:02.991 "iscsi_portal_group_set_auth", 
00:06:02.991 "iscsi_start_portal_group", 00:06:02.991 "iscsi_delete_portal_group", 00:06:02.991 "iscsi_create_portal_group", 00:06:02.991 "iscsi_get_portal_groups", 00:06:02.991 "iscsi_delete_target_node", 00:06:02.991 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.991 "iscsi_target_node_add_pg_ig_maps", 00:06:02.991 "iscsi_create_target_node", 00:06:02.991 "iscsi_get_target_nodes", 00:06:02.991 "iscsi_delete_initiator_group", 00:06:02.991 "iscsi_initiator_group_remove_initiators", 00:06:02.991 "iscsi_initiator_group_add_initiators", 00:06:02.991 "iscsi_create_initiator_group", 00:06:02.991 "iscsi_get_initiator_groups", 00:06:02.991 "nvmf_set_crdt", 00:06:02.991 "nvmf_set_config", 00:06:02.991 "nvmf_set_max_subsystems", 00:06:02.991 "nvmf_stop_mdns_prr", 00:06:02.991 "nvmf_publish_mdns_prr", 00:06:02.991 "nvmf_subsystem_get_listeners", 00:06:02.991 "nvmf_subsystem_get_qpairs", 00:06:02.991 "nvmf_subsystem_get_controllers", 00:06:02.991 "nvmf_get_stats", 00:06:02.991 "nvmf_get_transports", 00:06:02.991 "nvmf_create_transport", 00:06:02.991 "nvmf_get_targets", 00:06:02.991 "nvmf_delete_target", 00:06:02.991 "nvmf_create_target", 00:06:02.991 "nvmf_subsystem_allow_any_host", 00:06:02.991 "nvmf_subsystem_set_keys", 00:06:02.991 "nvmf_subsystem_remove_host", 00:06:02.991 "nvmf_subsystem_add_host", 00:06:02.991 "nvmf_ns_remove_host", 00:06:02.991 "nvmf_ns_add_host", 00:06:02.991 "nvmf_subsystem_remove_ns", 00:06:02.991 "nvmf_subsystem_set_ns_ana_group", 00:06:02.991 "nvmf_subsystem_add_ns", 00:06:02.991 "nvmf_subsystem_listener_set_ana_state", 00:06:02.991 "nvmf_discovery_get_referrals", 00:06:02.991 "nvmf_discovery_remove_referral", 00:06:02.991 "nvmf_discovery_add_referral", 00:06:02.991 "nvmf_subsystem_remove_listener", 00:06:02.991 "nvmf_subsystem_add_listener", 00:06:02.991 "nvmf_delete_subsystem", 00:06:02.991 "nvmf_create_subsystem", 00:06:02.991 "nvmf_get_subsystems", 00:06:02.991 "env_dpdk_get_mem_stats", 00:06:02.991 "nbd_get_disks", 00:06:02.991 
"nbd_stop_disk", 00:06:02.991 "nbd_start_disk", 00:06:02.991 "ublk_recover_disk", 00:06:02.991 "ublk_get_disks", 00:06:02.991 "ublk_stop_disk", 00:06:02.991 "ublk_start_disk", 00:06:02.991 "ublk_destroy_target", 00:06:02.991 "ublk_create_target", 00:06:02.991 "virtio_blk_create_transport", 00:06:02.991 "virtio_blk_get_transports", 00:06:02.991 "vhost_controller_set_coalescing", 00:06:02.991 "vhost_get_controllers", 00:06:02.991 "vhost_delete_controller", 00:06:02.991 "vhost_create_blk_controller", 00:06:02.991 "vhost_scsi_controller_remove_target", 00:06:02.991 "vhost_scsi_controller_add_target", 00:06:02.991 "vhost_start_scsi_controller", 00:06:02.991 "vhost_create_scsi_controller", 00:06:02.991 "thread_set_cpumask", 00:06:02.991 "scheduler_set_options", 00:06:02.991 "framework_get_governor", 00:06:02.991 "framework_get_scheduler", 00:06:02.991 "framework_set_scheduler", 00:06:02.991 "framework_get_reactors", 00:06:02.991 "thread_get_io_channels", 00:06:02.991 "thread_get_pollers", 00:06:02.991 "thread_get_stats", 00:06:02.991 "framework_monitor_context_switch", 00:06:02.991 "spdk_kill_instance", 00:06:02.991 "log_enable_timestamps", 00:06:02.991 "log_get_flags", 00:06:02.991 "log_clear_flag", 00:06:02.991 "log_set_flag", 00:06:02.991 "log_get_level", 00:06:02.991 "log_set_level", 00:06:02.991 "log_get_print_level", 00:06:02.991 "log_set_print_level", 00:06:02.991 "framework_enable_cpumask_locks", 00:06:02.991 "framework_disable_cpumask_locks", 00:06:02.991 "framework_wait_init", 00:06:02.991 "framework_start_init", 00:06:02.991 "scsi_get_devices", 00:06:02.991 "bdev_get_histogram", 00:06:02.991 "bdev_enable_histogram", 00:06:02.991 "bdev_set_qos_limit", 00:06:02.991 "bdev_set_qd_sampling_period", 00:06:02.991 "bdev_get_bdevs", 00:06:02.991 "bdev_reset_iostat", 00:06:02.991 "bdev_get_iostat", 00:06:02.991 "bdev_examine", 00:06:02.991 "bdev_wait_for_examine", 00:06:02.991 "bdev_set_options", 00:06:02.991 "accel_get_stats", 00:06:02.991 "accel_set_options", 
00:06:02.991 "accel_set_driver", 00:06:02.991 "accel_crypto_key_destroy", 00:06:02.991 "accel_crypto_keys_get", 00:06:02.991 "accel_crypto_key_create", 00:06:02.991 "accel_assign_opc", 00:06:02.991 "accel_get_module_info", 00:06:02.991 "accel_get_opc_assignments", 00:06:02.991 "vmd_rescan", 00:06:02.991 "vmd_remove_device", 00:06:02.991 "vmd_enable", 00:06:02.991 "sock_get_default_impl", 00:06:02.991 "sock_set_default_impl", 00:06:02.991 "sock_impl_set_options", 00:06:02.991 "sock_impl_get_options", 00:06:02.991 "iobuf_get_stats", 00:06:02.991 "iobuf_set_options", 00:06:02.991 "keyring_get_keys", 00:06:02.991 "framework_get_pci_devices", 00:06:02.991 "framework_get_config", 00:06:02.991 "framework_get_subsystems", 00:06:02.991 "fsdev_set_opts", 00:06:02.991 "fsdev_get_opts", 00:06:02.991 "trace_get_info", 00:06:02.991 "trace_get_tpoint_group_mask", 00:06:02.991 "trace_disable_tpoint_group", 00:06:02.991 "trace_enable_tpoint_group", 00:06:02.991 "trace_clear_tpoint_mask", 00:06:02.991 "trace_set_tpoint_mask", 00:06:02.991 "notify_get_notifications", 00:06:02.991 "notify_get_types", 00:06:02.991 "spdk_get_version", 00:06:02.991 "rpc_get_methods" 00:06:02.991 ] 00:06:02.991 04:22:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.991 04:22:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.991 04:22:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.991 04:22:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.991 04:22:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71494 00:06:02.991 04:22:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71494 ']' 00:06:02.992 04:22:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71494 00:06:02.992 04:22:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:02.992 04:22:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.992 04:22:02 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71494 00:06:03.250 killing process with pid 71494 00:06:03.250 04:22:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.250 04:22:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.250 04:22:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71494' 00:06:03.250 04:22:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71494 00:06:03.250 04:22:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71494 00:06:03.818 ************************************ 00:06:03.818 END TEST spdkcli_tcp 00:06:03.818 ************************************ 00:06:03.818 00:06:03.818 real 0m2.069s 00:06:03.818 user 0m3.380s 00:06:03.818 sys 0m0.711s 00:06:03.818 04:22:03 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.818 04:22:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.818 04:22:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.818 04:22:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.818 04:22:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.818 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:06:03.818 ************************************ 00:06:03.818 START TEST dpdk_mem_utility 00:06:03.818 ************************************ 00:06:03.818 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.818 * Looking for test storage... 
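The long method list captured above is the output of the `rpc_get_methods` call that the spdkcli_tcp test issues against the target's JSON-RPC socket (normally via SPDK's `scripts/rpc.py`). As a minimal sketch of the wire format involved, assuming only that the target speaks plain JSON-RPC 2.0 and that the response carries the method names as a JSON array in `result` (the shape seen in the log):

```python
import json

def build_rpc_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request frame such as the one the test
    sends for "rpc_get_methods". params is omitted when None."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

def parse_rpc_response(raw):
    """Extract the result field from a JSON-RPC response string,
    raising if the target reported an error instead."""
    resp = json.loads(raw)
    if "error" in resp:
        raise RuntimeError(resp["error"])
    return resp["result"]

# Hypothetical response fragment shaped like the list in the log:
sample = '{"jsonrpc": "2.0", "id": 1, "result": ["spdk_get_version", "rpc_get_methods"]}'
methods = parse_rpc_response(sample)
```

The `sample` response here is illustrative, not taken from this run; a real session would send the frame over the TCP or UNIX socket the target listens on.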
00:06:03.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:03.818 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.818 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.818 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.077 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:04.077 04:22:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:04.078 04:22:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.078 04:22:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:04.078 04:22:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.078 04:22:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.078 04:22:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.078 04:22:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.078 --rc genhtml_branch_coverage=1 00:06:04.078 --rc genhtml_function_coverage=1 00:06:04.078 --rc genhtml_legend=1 00:06:04.078 --rc geninfo_all_blocks=1 00:06:04.078 --rc geninfo_unexecuted_blocks=1 00:06:04.078 00:06:04.078 ' 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.078 --rc genhtml_branch_coverage=1 00:06:04.078 --rc genhtml_function_coverage=1 00:06:04.078 --rc genhtml_legend=1 00:06:04.078 --rc geninfo_all_blocks=1 00:06:04.078 --rc 
geninfo_unexecuted_blocks=1 00:06:04.078 00:06:04.078 ' 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.078 --rc genhtml_branch_coverage=1 00:06:04.078 --rc genhtml_function_coverage=1 00:06:04.078 --rc genhtml_legend=1 00:06:04.078 --rc geninfo_all_blocks=1 00:06:04.078 --rc geninfo_unexecuted_blocks=1 00:06:04.078 00:06:04.078 ' 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.078 --rc genhtml_branch_coverage=1 00:06:04.078 --rc genhtml_function_coverage=1 00:06:04.078 --rc genhtml_legend=1 00:06:04.078 --rc geninfo_all_blocks=1 00:06:04.078 --rc geninfo_unexecuted_blocks=1 00:06:04.078 00:06:04.078 ' 00:06:04.078 04:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:04.078 04:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71594 00:06:04.078 04:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.078 04:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71594 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71594 ']' 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.078 04:22:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.078 [2024-12-13 04:22:04.042392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:04.078 [2024-12-13 04:22:04.042669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71594 ] 00:06:04.337 [2024-12-13 04:22:04.200938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.337 [2024-12-13 04:22:04.239287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.906 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.906 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:04.906 04:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.906 04:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.906 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.906 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.906 { 00:06:04.906 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.906 } 00:06:04.906 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.906 04:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:04.906 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:04.906 1 heaps totaling size 818.000000 MiB 00:06:04.906 size: 818.000000 MiB heap id: 0 00:06:04.906 end heaps---------- 00:06:04.906 9 mempools totaling size 603.782043 MiB 00:06:04.906 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.906 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.906 size: 100.555481 MiB name: bdev_io_71594 00:06:04.906 size: 50.003479 MiB name: msgpool_71594 00:06:04.906 size: 36.509338 MiB name: fsdev_io_71594 00:06:04.906 size: 21.763794 MiB name: PDU_Pool 00:06:04.906 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:04.906 size: 4.133484 MiB name: evtpool_71594 00:06:04.906 size: 0.026123 MiB name: Session_Pool 00:06:04.906 end mempools------- 00:06:04.906 6 memzones totaling size 4.142822 MiB 00:06:04.906 size: 1.000366 MiB name: RG_ring_0_71594 00:06:04.906 size: 1.000366 MiB name: RG_ring_1_71594 00:06:04.906 size: 1.000366 MiB name: RG_ring_4_71594 00:06:04.906 size: 1.000366 MiB name: RG_ring_5_71594 00:06:04.906 size: 0.125366 MiB name: RG_ring_2_71594 00:06:04.906 size: 0.015991 MiB name: RG_ring_3_71594 00:06:04.906 end memzones------- 00:06:04.906 04:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.166 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:06:05.166 list of free elements. 
size: 10.803223 MiB 00:06:05.166 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:05.166 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:05.166 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:05.166 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:05.166 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:05.166 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:05.166 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:05.166 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:05.166 element at address: 0x20001ae00000 with size: 0.568420 MiB 00:06:05.166 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:05.166 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:05.166 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:05.166 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:05.166 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:05.166 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:05.166 list of standard malloc elements. 
size: 199.267883 MiB 00:06:05.166 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:05.166 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:05.166 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:05.166 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:05.166 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:05.166 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:05.166 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:05.166 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:05.166 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:05.166 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:05.166 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:05.167 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:06:05.167 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d3c0 with 
size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:05.167 element at address: 
0x200000c7e8c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:05.167 
element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92440 with size: 0.000183 
MiB 00:06:05.167 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:05.167 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93940 
with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:05.168 element at 
address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:05.168 element at address: 0x200028265500 with size: 0.000183 MiB 00:06:05.168 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826cf00 with size: 0.000183 MiB 
00:06:05.168 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e400 with 
size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:05.168 element at address: 
0x20002826f900 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:05.168 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:05.168 list of memzone associated elements. size: 607.928894 MiB 00:06:05.168 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:05.168 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.168 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:05.168 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.168 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:05.168 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71594_0 00:06:05.168 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:05.168 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71594_0 00:06:05.168 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:05.168 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71594_0 00:06:05.168 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:05.168 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.168 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:05.168 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:05.168 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:05.168 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71594_0 00:06:05.168 element at address: 0x2000009ffe00 
with size: 2.000488 MiB 00:06:05.168 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71594 00:06:05.168 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:05.168 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71594 00:06:05.168 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:05.168 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.169 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:05.169 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.169 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:05.169 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.169 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:05.169 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.169 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:05.169 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71594 00:06:05.169 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:05.169 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71594 00:06:05.169 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:05.169 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71594 00:06:05.169 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:05.169 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71594 00:06:05.169 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:05.169 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71594 00:06:05.169 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:05.169 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71594 00:06:05.169 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:05.169 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.169 element at address: 0x200003e7b780 with 
size: 0.500488 MiB 00:06:05.169 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.169 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:05.169 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.169 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:05.169 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71594 00:06:05.169 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:05.169 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71594 00:06:05.169 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:05.169 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.169 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:05.169 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.169 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:05.169 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71594 00:06:05.169 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:05.169 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.169 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:05.169 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71594 00:06:05.169 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:05.169 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71594 00:06:05.169 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:05.169 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71594 00:06:05.169 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:05.169 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:05.169 04:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.169 04:22:04 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71594 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71594 ']' 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71594 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71594 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71594' 00:06:05.169 killing process with pid 71594 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71594 00:06:05.169 04:22:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71594 00:06:05.735 00:06:05.735 real 0m1.891s 00:06:05.735 user 0m1.668s 00:06:05.735 sys 0m0.650s 00:06:05.735 04:22:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.735 04:22:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.735 ************************************ 00:06:05.735 END TEST dpdk_mem_utility 00:06:05.735 ************************************ 00:06:05.735 04:22:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:05.735 04:22:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.735 04:22:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.735 04:22:05 -- common/autotest_common.sh@10 -- # set +x 00:06:05.735 ************************************ 00:06:05.735 START TEST event 00:06:05.735 ************************************ 00:06:05.735 04:22:05 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:05.995 * Looking for test storage... 00:06:05.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.995 04:22:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.995 04:22:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.995 04:22:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.995 04:22:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.995 04:22:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.995 04:22:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.995 04:22:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.995 04:22:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.995 04:22:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.995 04:22:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.995 04:22:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.995 04:22:05 event -- scripts/common.sh@344 -- # case "$op" in 00:06:05.995 04:22:05 event -- scripts/common.sh@345 -- # : 1 00:06:05.995 04:22:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.995 04:22:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.995 04:22:05 event -- scripts/common.sh@365 -- # decimal 1 00:06:05.995 04:22:05 event -- scripts/common.sh@353 -- # local d=1 00:06:05.995 04:22:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.995 04:22:05 event -- scripts/common.sh@355 -- # echo 1 00:06:05.995 04:22:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.995 04:22:05 event -- scripts/common.sh@366 -- # decimal 2 00:06:05.995 04:22:05 event -- scripts/common.sh@353 -- # local d=2 00:06:05.995 04:22:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.995 04:22:05 event -- scripts/common.sh@355 -- # echo 2 00:06:05.995 04:22:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.995 04:22:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.995 04:22:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.995 04:22:05 event -- scripts/common.sh@368 -- # return 0 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.995 --rc genhtml_branch_coverage=1 00:06:05.995 --rc genhtml_function_coverage=1 00:06:05.995 --rc genhtml_legend=1 00:06:05.995 --rc geninfo_all_blocks=1 00:06:05.995 --rc geninfo_unexecuted_blocks=1 00:06:05.995 00:06:05.995 ' 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.995 --rc genhtml_branch_coverage=1 00:06:05.995 --rc genhtml_function_coverage=1 00:06:05.995 --rc genhtml_legend=1 00:06:05.995 --rc geninfo_all_blocks=1 00:06:05.995 --rc geninfo_unexecuted_blocks=1 00:06:05.995 00:06:05.995 ' 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.995 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:05.995 --rc genhtml_branch_coverage=1 00:06:05.995 --rc genhtml_function_coverage=1 00:06:05.995 --rc genhtml_legend=1 00:06:05.995 --rc geninfo_all_blocks=1 00:06:05.995 --rc geninfo_unexecuted_blocks=1 00:06:05.995 00:06:05.995 ' 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.995 --rc genhtml_branch_coverage=1 00:06:05.995 --rc genhtml_function_coverage=1 00:06:05.995 --rc genhtml_legend=1 00:06:05.995 --rc geninfo_all_blocks=1 00:06:05.995 --rc geninfo_unexecuted_blocks=1 00:06:05.995 00:06:05.995 ' 00:06:05.995 04:22:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:05.995 04:22:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.995 04:22:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:05.995 04:22:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.995 04:22:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.995 ************************************ 00:06:05.995 START TEST event_perf 00:06:05.995 ************************************ 00:06:05.995 04:22:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.995 Running I/O for 1 seconds...[2024-12-13 04:22:05.930862] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:05.995 [2024-12-13 04:22:05.931031] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71680 ] 00:06:06.254 [2024-12-13 04:22:06.086757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.254 [2024-12-13 04:22:06.130257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.255 [2024-12-13 04:22:06.130607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.255 [2024-12-13 04:22:06.130495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.255 [2024-12-13 04:22:06.130460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.191 Running I/O for 1 seconds... 00:06:07.191 lcore 0: 89271 00:06:07.191 lcore 1: 89274 00:06:07.191 lcore 2: 89278 00:06:07.191 lcore 3: 89275 00:06:07.191 done. 
00:06:07.450 00:06:07.450 real 0m1.323s 00:06:07.450 user 0m4.092s 00:06:07.450 sys 0m0.103s 00:06:07.450 04:22:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.450 04:22:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.450 ************************************ 00:06:07.450 END TEST event_perf 00:06:07.450 ************************************ 00:06:07.450 04:22:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:07.450 04:22:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:07.450 04:22:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.450 04:22:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.450 ************************************ 00:06:07.450 START TEST event_reactor 00:06:07.450 ************************************ 00:06:07.450 04:22:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:07.450 [2024-12-13 04:22:07.327562] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:07.450 [2024-12-13 04:22:07.328042] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71724 ] 00:06:07.709 [2024-12-13 04:22:07.484803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.709 [2024-12-13 04:22:07.522022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.646 test_start 00:06:08.646 oneshot 00:06:08.646 tick 100 00:06:08.646 tick 100 00:06:08.646 tick 250 00:06:08.646 tick 100 00:06:08.646 tick 100 00:06:08.646 tick 100 00:06:08.646 tick 250 00:06:08.646 tick 500 00:06:08.646 tick 100 00:06:08.646 tick 100 00:06:08.646 tick 250 00:06:08.646 tick 100 00:06:08.646 tick 100 00:06:08.646 test_end 00:06:08.646 00:06:08.646 real 0m1.318s 00:06:08.646 user 0m1.134s 00:06:08.646 sys 0m0.077s 00:06:08.646 04:22:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.646 04:22:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:08.646 ************************************ 00:06:08.646 END TEST event_reactor 00:06:08.646 ************************************ 00:06:08.904 04:22:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.904 04:22:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:08.904 04:22:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.904 04:22:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.904 ************************************ 00:06:08.904 START TEST event_reactor_perf 00:06:08.904 ************************************ 00:06:08.904 04:22:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.904 [2024-12-13 
04:22:08.717566] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:08.904 [2024-12-13 04:22:08.717695] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71756 ] 00:06:08.904 [2024-12-13 04:22:08.870093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.904 [2024-12-13 04:22:08.906416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.282 test_start 00:06:10.282 test_end 00:06:10.282 Performance: 409053 events per second 00:06:10.282 00:06:10.282 real 0m1.313s 00:06:10.282 user 0m1.118s 00:06:10.282 sys 0m0.088s 00:06:10.282 04:22:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.282 04:22:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.282 ************************************ 00:06:10.282 END TEST event_reactor_perf 00:06:10.282 ************************************ 00:06:10.282 04:22:10 event -- event/event.sh@49 -- # uname -s 00:06:10.282 04:22:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.282 04:22:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:10.282 04:22:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.282 04:22:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.282 04:22:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.282 ************************************ 00:06:10.282 START TEST event_scheduler 00:06:10.282 ************************************ 00:06:10.282 04:22:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:10.282 * Looking for test storage... 
00:06:10.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:10.282 04:22:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.282 04:22:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.282 04:22:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.282 04:22:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:10.282 04:22:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.542 04:22:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:10.542 04:22:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.542 04:22:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.542 04:22:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.542 04:22:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.542 --rc genhtml_branch_coverage=1 00:06:10.542 --rc genhtml_function_coverage=1 00:06:10.542 --rc genhtml_legend=1 00:06:10.542 --rc geninfo_all_blocks=1 00:06:10.542 --rc geninfo_unexecuted_blocks=1 00:06:10.542 00:06:10.542 ' 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.542 --rc genhtml_branch_coverage=1 00:06:10.542 --rc genhtml_function_coverage=1 00:06:10.542 --rc 
genhtml_legend=1 00:06:10.542 --rc geninfo_all_blocks=1 00:06:10.542 --rc geninfo_unexecuted_blocks=1 00:06:10.542 00:06:10.542 ' 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.542 --rc genhtml_branch_coverage=1 00:06:10.542 --rc genhtml_function_coverage=1 00:06:10.542 --rc genhtml_legend=1 00:06:10.542 --rc geninfo_all_blocks=1 00:06:10.542 --rc geninfo_unexecuted_blocks=1 00:06:10.542 00:06:10.542 ' 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.542 --rc genhtml_branch_coverage=1 00:06:10.542 --rc genhtml_function_coverage=1 00:06:10.542 --rc genhtml_legend=1 00:06:10.542 --rc geninfo_all_blocks=1 00:06:10.542 --rc geninfo_unexecuted_blocks=1 00:06:10.542 00:06:10.542 ' 00:06:10.542 04:22:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.542 04:22:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71827 00:06:10.542 04:22:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.542 04:22:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.542 04:22:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71827 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71827 ']' 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:10.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.542 04:22:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.542 [2024-12-13 04:22:10.383907] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:10.542 [2024-12-13 04:22:10.384019] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71827 ] 00:06:10.542 [2024-12-13 04:22:10.539519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.802 [2024-12-13 04:22:10.570333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.802 [2024-12-13 04:22:10.570595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.802 [2024-12-13 04:22:10.570736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.802 [2024-12-13 04:22:10.570787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:11.372 04:22:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.372 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.372 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.372 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.372 POWER: Cannot set governor of lcore 0 to performance 00:06:11.372 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.372 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.372 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:11.372 POWER: Unable to set Power Management Environment for lcore 0 00:06:11.372 [2024-12-13 04:22:11.212164] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:11.372 [2024-12-13 04:22:11.212219] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:11.372 [2024-12-13 04:22:11.212325] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:11.372 [2024-12-13 04:22:11.212425] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.372 [2024-12-13 04:22:11.212438] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.372 [2024-12-13 04:22:11.212457] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.372 04:22:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.372 [2024-12-13 04:22:11.340871] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.372 04:22:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.372 04:22:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:11.372 ************************************
00:06:11.372 START TEST scheduler_create_thread
00:06:11.372 ************************************
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.372 2
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.372 3
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.372 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.632 4
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.632 5
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.632 6
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.632 7
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.632 8
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.632 9
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.632 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.891 10
00:06:11.891 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.891 04:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:11.891 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.891 04:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.271 04:22:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.271 04:22:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:13.271 04:22:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:13.271 04:22:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.271 04:22:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:14.210 04:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.210 04:22:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:14.210 04:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.210 04:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:15.148 04:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.148 04:22:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:15.148 04:22:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:15.148 04:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.148 04:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:15.714 04:22:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.714
00:06:15.714 real 0m4.212s
00:06:15.714 user 0m0.029s
00:06:15.714 sys 0m0.009s
00:06:15.714 04:22:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.714 04:22:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:15.714 ************************************
00:06:15.714 END TEST scheduler_create_thread
00:06:15.714 ************************************
00:06:15.715 04:22:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:15.715 04:22:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71827
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71827 ']'
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71827
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71827
00:06:15.715 killing process with pid 71827
04:22:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71827'
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71827
00:06:15.715 04:22:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71827
00:06:15.974 [2024-12-13 04:22:15.945660] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:16.542
00:06:16.542 real 0m6.255s
00:06:16.542 user 0m13.951s
00:06:16.542 sys 0m0.562s
00:06:16.542 04:22:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.542 ************************************
00:06:16.542 END TEST event_scheduler
00:06:16.542 ************************************
00:06:16.542 04:22:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:16.542 04:22:16 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:16.542 04:22:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:16.542 04:22:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.542 04:22:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.542 04:22:16 event -- common/autotest_common.sh@10 -- # set +x
00:06:16.542 ************************************
00:06:16.542 START TEST app_repeat
00:06:16.542 ************************************
00:06:16.542 04:22:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71939
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71939'
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:16.542 Process app_repeat pid: 71939
00:06:16.542 spdk_app_start Round 0
00:06:16.542 04:22:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71939 /var/tmp/spdk-nbd.sock
00:06:16.542 04:22:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71939 ']'
00:06:16.542 04:22:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:16.542 04:22:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
04:22:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:16.542 04:22:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.542 04:22:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:16.542 [2024-12-13 04:22:16.463176] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:16.542 [2024-12-13 04:22:16.463321] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71939 ]
00:06:16.801 [2024-12-13 04:22:16.616795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:16.801 [2024-12-13 04:22:16.662897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.801 [2024-12-13 04:22:16.663012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.369 04:22:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.369 04:22:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:17.369 04:22:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:17.629 Malloc0
00:06:17.629 04:22:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:17.888 Malloc1
00:06:17.888 04:22:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:17.888 04:22:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:18.147 /dev/nbd0
00:06:18.147 04:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:18.147 04:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:18.147 1+0 records in
00:06:18.147 1+0 records out
00:06:18.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573449 s, 7.1 MB/s
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:18.147 04:22:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:18.147 04:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:18.147 04:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:18.147 04:22:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:18.406 /dev/nbd1
00:06:18.406 04:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:18.406 04:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:18.406 1+0 records in
00:06:18.406 1+0 records out
00:06:18.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198322 s, 20.7 MB/s
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:18.406 04:22:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:18.665 04:22:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:18.665 04:22:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:18.665 {
00:06:18.665 "nbd_device": "/dev/nbd0",
00:06:18.665 "bdev_name": "Malloc0"
00:06:18.665 },
00:06:18.665 {
00:06:18.665 "nbd_device": "/dev/nbd1",
00:06:18.665 "bdev_name": "Malloc1"
00:06:18.665 }
00:06:18.665 ]'
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:18.665 {
00:06:18.665 "nbd_device": "/dev/nbd0",
00:06:18.665 "bdev_name": "Malloc0"
00:06:18.665 },
00:06:18.665 {
00:06:18.665 "nbd_device": "/dev/nbd1",
00:06:18.665 "bdev_name": "Malloc1"
00:06:18.665 }
00:06:18.665 ]'
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:18.665 /dev/nbd1'
00:06:18.665 04:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:18.925 /dev/nbd1'
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:18.925 256+0 records in
00:06:18.925 256+0 records out
00:06:18.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426982 s, 246 MB/s
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:18.925 256+0 records in
00:06:18.925 256+0 records out
00:06:18.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252879 s, 41.5 MB/s
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:18.925 256+0 records in
00:06:18.925 256+0 records out
00:06:18.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274848 s, 38.2 MB/s
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:18.925 04:22:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:19.190 04:22:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:19.190 04:22:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:19.190 04:22:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:19.190 04:22:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:19.190 04:22:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:19.190 04:22:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:19.190 04:22:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:19.468 04:22:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:19.468 04:22:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:19.741 04:22:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:20.000 [2024-12-13 04:22:19.840342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:20.000 [2024-12-13 04:22:19.863991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.000 [2024-12-13 04:22:19.863992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:20.000 [2024-12-13 04:22:19.907112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:20.000 [2024-12-13 04:22:19.907182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:23.291 spdk_app_start Round 1
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
04:22:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:23.291 04:22:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:23.291 04:22:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71939 /var/tmp/spdk-nbd.sock
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71939 ']'
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.291 04:22:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:23.291 04:22:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:23.291 Malloc0
00:06:23.291 04:22:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:23.291 Malloc1
00:06:23.291 04:22:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.291 04:22:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:23.551 /dev/nbd0
00:06:23.551 04:22:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:23.551 04:22:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.551 1+0 records in
00:06:23.551 1+0 records out
00:06:23.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238217 s, 17.2 MB/s
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:23.551 04:22:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:23.551 04:22:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:23.551 04:22:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.551 04:22:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:23.810 /dev/nbd1
00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.810 1+0 records in
00:06:23.810 1+0 records out
00:06:23.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349792 s, 11.7 MB/s
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:23.810 04:22:23 event.app_repeat
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.810 04:22:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.810 04:22:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.070 04:22:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.070 { 00:06:24.070 "nbd_device": "/dev/nbd0", 00:06:24.070 "bdev_name": "Malloc0" 00:06:24.070 }, 00:06:24.070 { 00:06:24.070 "nbd_device": "/dev/nbd1", 00:06:24.070 "bdev_name": "Malloc1" 00:06:24.070 } 00:06:24.070 ]' 00:06:24.070 04:22:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.070 { 00:06:24.070 "nbd_device": "/dev/nbd0", 00:06:24.070 "bdev_name": "Malloc0" 00:06:24.070 }, 00:06:24.070 { 00:06:24.070 "nbd_device": "/dev/nbd1", 00:06:24.070 "bdev_name": "Malloc1" 00:06:24.070 } 00:06:24.070 ]' 00:06:24.070 04:22:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.070 /dev/nbd1' 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.070 /dev/nbd1' 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.070 
04:22:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.070 256+0 records in 00:06:24.070 256+0 records out 00:06:24.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00793367 s, 132 MB/s 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.070 04:22:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.330 256+0 records in 00:06:24.330 256+0 records out 00:06:24.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262228 s, 40.0 MB/s 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.330 256+0 records in 00:06:24.330 256+0 records out 00:06:24.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249361 s, 42.1 MB/s 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.330 04:22:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.330 04:22:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.590 04:22:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.850 04:22:24 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.850 04:22:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.850 04:22:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.109 04:22:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.367 [2024-12-13 04:22:25.155028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.367 [2024-12-13 04:22:25.180287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.367 [2024-12-13 04:22:25.180328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.367 [2024-12-13 04:22:25.222609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.367 [2024-12-13 04:22:25.222667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.657 spdk_app_start Round 2 00:06:28.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
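The disk count checked in the trace above ("count=2") comes from piping the device names that `jq -r '.[] | .nbd_device'` extracts out of the `nbd_get_disks` JSON through `grep -c /dev/nbd`. A minimal sketch of that counting step, with the two names hard-coded instead of fetched over the RPC socket (the variable name mirrors `nbd_common.sh`; everything else is illustrative):

```shell
# Stand-in for: rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
nbd_disks_name='/dev/nbd0
/dev/nbd1'

# Same check as nbd_common.sh@65: grep -c counts one matching line per attached device.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "count=$count"   # count=2, so the '[' 2 -ne 2 ']' guard above does not fire
```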
00:06:28.657 04:22:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.657 04:22:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.657 04:22:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71939 /var/tmp/spdk-nbd.sock 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71939 ']' 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.657 04:22:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.657 04:22:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.657 Malloc0 00:06:28.657 04:22:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.657 Malloc1 00:06:28.657 04:22:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.657 04:22:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.917 /dev/nbd0 00:06:28.917 04:22:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.917 04:22:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
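The `waitfornbd` helper being traced here polls `/proc/partitions` up to 20 times for the new device name before giving up with `break`. A self-contained sketch of that retry pattern, using a temp file in place of `/proc/partitions` so it runs without an nbd device (the name `nbd0` and the 20-iteration bound mirror the trace; the temp file is a stand-in):

```shell
# Temp file standing in for /proc/partitions.
partitions=$(mktemp)
echo "nbd0" > "$partitions"   # pretend the kernel has registered nbd0

# Poll up to 20 times for the device name, as waitfornbd does.
found=1
i=1
while [ "$i" -le 20 ]; do
    if grep -q -w nbd0 "$partitions"; then
        found=0
        break
    fi
    sleep 0.1
    i=$((i + 1))
done
rm -f "$partitions"
echo "found=$found"   # found=0: the name showed up within the retry budget
```

The real helper follows this with a 1-block direct-I/O `dd` probe, visible in the trace, to confirm the device actually accepts reads before the test proceeds.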
00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.917 1+0 records in 00:06:28.917 1+0 records out 00:06:28.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224494 s, 18.2 MB/s 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.917 04:22:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.918 04:22:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.918 04:22:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.918 04:22:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.918 04:22:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.918 04:22:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.918 04:22:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.178 /dev/nbd1 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:29.178 04:22:29 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.178 1+0 records in 00:06:29.178 1+0 records out 00:06:29.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285405 s, 14.4 MB/s 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.178 04:22:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.178 04:22:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.437 { 00:06:29.437 "nbd_device": "/dev/nbd0", 00:06:29.437 "bdev_name": "Malloc0" 00:06:29.437 }, 00:06:29.437 { 00:06:29.437 "nbd_device": "/dev/nbd1", 00:06:29.437 "bdev_name": "Malloc1" 00:06:29.437 } 00:06:29.437 ]' 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.437 { 
00:06:29.437 "nbd_device": "/dev/nbd0", 00:06:29.437 "bdev_name": "Malloc0" 00:06:29.437 }, 00:06:29.437 { 00:06:29.437 "nbd_device": "/dev/nbd1", 00:06:29.437 "bdev_name": "Malloc1" 00:06:29.437 } 00:06:29.437 ]' 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.437 /dev/nbd1' 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.437 /dev/nbd1' 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.437 04:22:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.438 256+0 records in 00:06:29.438 256+0 records out 00:06:29.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525394 s, 200 MB/s 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.438 04:22:29 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.438 256+0 records in 00:06:29.438 256+0 records out 00:06:29.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255646 s, 41.0 MB/s 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.438 256+0 records in 00:06:29.438 256+0 records out 00:06:29.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215672 s, 48.6 MB/s 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
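Each verify pass above writes a 1 MiB random pattern (256 x 4 KiB blocks) to the device with `dd` and reads it back with `cmp -b -n 1M`. The same round trip can be sketched with plain files standing in for `/dev/nbd0` (temporary paths here are stand-ins, not the repo paths from the trace; the real test adds `oflag=direct` to bypass the page cache, which regular files on some filesystems reject, so it is omitted here):

```shell
tmpdir=$(mktemp -d)

# Generate the 1 MiB random reference pattern, as nbd_common.sh@76 does.
dd if=/dev/urandom of="$tmpdir/nbdrandtest" bs=4096 count=256 2>/dev/null

# "Write" the pattern to the device stand-in.
dd if="$tmpdir/nbdrandtest" of="$tmpdir/nbd0" bs=4096 count=256 2>/dev/null

# Read back and byte-compare the first 1M, as nbd_common.sh@83 does.
if cmp -b -n 1M "$tmpdir/nbdrandtest" "$tmpdir/nbd0"; then
    result=verified
else
    result=corrupt
fi
rm -r "$tmpdir"
echo "$result"
```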
00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.438 04:22:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.697 04:22:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.957 04:22:29 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.957 04:22:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.216 04:22:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.216 04:22:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.476 04:22:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.476 
[2024-12-13 04:22:30.420424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.476 [2024-12-13 04:22:30.445049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.476 [2024-12-13 04:22:30.445050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.476 [2024-12-13 04:22:30.487276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.476 [2024-12-13 04:22:30.487431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.767 04:22:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71939 /var/tmp/spdk-nbd.sock 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71939 ']' 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
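After both `nbd_stop_disk` calls, `nbd_get_disks` returns `[]`, the extracted name list is empty, and the count drops to 0. One subtlety the trace exposes: `grep -c` still prints `0` on no match but exits non-zero, which is why a bare `true` appears right after it (nbd_common.sh@65) to keep the `set -e` script alive. A sketch of that final check:

```shell
# Empty stand-in for the post-teardown device name list.
nbd_disks_name=''

# grep -c prints 0 but exits 1 when nothing matches; '|| true' absorbs the failure.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "count=$count"   # count=0, so '[' 0 -ne 0 ']' does not fire and teardown succeeds
```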
00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.767 04:22:33 event.app_repeat -- event/event.sh@39 -- # killprocess 71939 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71939 ']' 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71939 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71939 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71939' 00:06:33.767 killing process with pid 71939 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71939 00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 71939 00:06:33.767 spdk_app_start is called in Round 0. 00:06:33.767 Shutdown signal received, stop current app iteration 00:06:33.767 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:33.767 spdk_app_start is called in Round 1. 00:06:33.767 Shutdown signal received, stop current app iteration 00:06:33.767 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:33.767 spdk_app_start is called in Round 2. 
00:06:33.767 Shutdown signal received, stop current app iteration
00:06:33.767 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization...
00:06:33.767 spdk_app_start is called in Round 3.
00:06:33.767 Shutdown signal received, stop current app iteration
00:06:33.767 04:22:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:33.767 04:22:33 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:33.767
00:06:33.767 real 0m17.306s
00:06:33.767 user 0m38.372s
00:06:33.767 sys 0m2.418s
00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:33.767 04:22:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:33.767 ************************************
00:06:33.767 END TEST app_repeat
00:06:33.767 ************************************
00:06:33.767 04:22:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:33.767 04:22:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:33.767 04:22:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:33.767 04:22:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:33.767 04:22:33 event -- common/autotest_common.sh@10 -- # set +x
00:06:33.767 ************************************
00:06:33.767 START TEST cpu_locks
00:06:33.767 ************************************
00:06:33.767 04:22:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:34.027 * Looking for test storage...
00:06:34.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:34.027 04:22:33 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:34.027 04:22:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:06:34.027 04:22:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:34.027 04:22:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:34.027 04:22:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:34.027 04:22:33 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:34.028 04:22:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:34.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:34.028 --rc genhtml_branch_coverage=1
00:06:34.028 --rc genhtml_function_coverage=1
00:06:34.028 --rc genhtml_legend=1
00:06:34.028 --rc geninfo_all_blocks=1
00:06:34.028 --rc geninfo_unexecuted_blocks=1
00:06:34.028
00:06:34.028 '
00:06:34.028 04:22:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:34.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:34.028 --rc genhtml_branch_coverage=1
00:06:34.028 --rc genhtml_function_coverage=1
00:06:34.028 --rc genhtml_legend=1
00:06:34.028 --rc geninfo_all_blocks=1
00:06:34.028 --rc geninfo_unexecuted_blocks=1
00:06:34.028
00:06:34.028 '
00:06:34.028 04:22:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:34.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:34.028 --rc genhtml_branch_coverage=1
00:06:34.028 --rc genhtml_function_coverage=1
00:06:34.028 --rc genhtml_legend=1
00:06:34.028 --rc geninfo_all_blocks=1
00:06:34.028 --rc geninfo_unexecuted_blocks=1
00:06:34.028
00:06:34.028 '
00:06:34.028 04:22:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:34.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:34.028 --rc genhtml_branch_coverage=1
00:06:34.028 --rc genhtml_function_coverage=1
00:06:34.028 --rc genhtml_legend=1
00:06:34.028 --rc geninfo_all_blocks=1
00:06:34.028 --rc geninfo_unexecuted_blocks=1
00:06:34.028
00:06:34.028 '
00:06:34.028 04:22:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:34.028 04:22:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:34.028 04:22:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:34.028 04:22:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:34.028 04:22:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:34.028 04:22:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:34.028 04:22:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:34.028 ************************************
00:06:34.028 START TEST default_locks
00:06:34.028 ************************************
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72371
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72371
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72371 ']'
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:34.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:34.028 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:34.287 [2024-12-13 04:22:34.115222] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:34.287 [2024-12-13 04:22:34.115464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72371 ]
00:06:34.287 [2024-12-13 04:22:34.268838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.287 [2024-12-13 04:22:34.294528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.228 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:35.228 04:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:35.228 04:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72371
00:06:35.228 04:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:35.228 04:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72371
00:06:35.228 04:22:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72371
00:06:35.228 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72371 ']'
00:06:35.228 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72371
00:06:35.228 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:35.486 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:35.486 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72371
00:06:35.486 killing process with pid 72371
04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:35.486 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:35.486 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72371'
00:06:35.486 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72371
00:06:35.486 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72371
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72371
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72371
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:35.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.746 ERROR: process (pid: 72371) is no longer running
04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72371
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72371 ']'
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.746 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72371) - No such process
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:35.746
00:06:35.746 real 0m1.637s
00:06:35.746 user 0m1.607s
00:06:35.746 sys 0m0.549s
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.746 04:22:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.746 ************************************
00:06:35.746 END TEST default_locks
00:06:35.746 ************************************
00:06:35.746 04:22:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:35.746 04:22:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.746 04:22:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.746 04:22:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.746 ************************************
00:06:35.746 START TEST default_locks_via_rpc
00:06:35.746 ************************************
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72418
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72418
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72418 ']'
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.746 04:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.006 [2024-12-13 04:22:35.816176] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:36.006 [2024-12-13 04:22:35.816390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ]
00:06:36.006 [2024-12-13 04:22:35.959528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.006 [2024-12-13 04:22:35.985776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72418
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72418
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72418
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72418 ']'
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72418
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72418
00:06:36.944 killing process with pid 72418
04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72418'
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72418
00:06:36.944 04:22:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72418
00:06:37.513 ************************************
00:06:37.513 END TEST default_locks_via_rpc
00:06:37.513 ************************************
00:06:37.513
00:06:37.513 real 0m1.758s
00:06:37.513 user 0m1.734s
00:06:37.513 sys 0m0.487s
00:06:37.513 04:22:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.513 04:22:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.772 04:22:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:37.772 04:22:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.772 04:22:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.772 04:22:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:37.772 ************************************
00:06:37.772 START TEST non_locking_app_on_locked_coremask
00:06:37.772 ************************************
00:06:37.772 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:37.772 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72466
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72466 /var/tmp/spdk.sock
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72466 ']'
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:37.773 04:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:38.032 [2024-12-13 04:22:37.649421] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:38.032 [2024-12-13 04:22:37.650155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72466 ]
00:06:38.032 [2024-12-13 04:22:37.817225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.032 [2024-12-13 04:22:37.859041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72482
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72482 /var/tmp/spdk2.sock
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72482 ']'
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:38.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:38.599 04:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:38.859 [2024-12-13 04:22:38.518953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:38.860 [2024-12-13 04:22:38.519172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72482 ]
00:06:38.859 [2024-12-13 04:22:38.669971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:38.859 [2024-12-13 04:22:38.670047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.859 [2024-12-13 04:22:38.756683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.428 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.428 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:39.428 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72466
00:06:39.428 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72466
00:06:39.428 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:39.997 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72466
00:06:39.997 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72466 ']'
00:06:39.997 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72466
00:06:39.997 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:39.997 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:39.997 04:22:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72466
00:06:40.256 04:22:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:40.256 04:22:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:40.256 killing process with pid 72466
04:22:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72466'
00:06:40.256 04:22:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72466
00:06:40.256 04:22:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72466
00:06:41.194 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72482
00:06:41.194 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72482 ']'
00:06:41.194 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72482
00:06:41.194 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:41.453 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:41.453 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72482
00:06:41.453 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:41.453 killing process with pid 72482
04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:41.453 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72482'
00:06:41.453 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72482
00:06:41.453 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72482
00:06:42.020 ************************************
00:06:42.020 END TEST non_locking_app_on_locked_coremask
00:06:42.020 ************************************
00:06:42.020
00:06:42.020 real 0m4.309s
00:06:42.020 user 0m4.163s
00:06:42.020 sys 0m1.377s
00:06:42.020 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.020 04:22:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.020 04:22:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:42.020 04:22:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.020 04:22:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.020 04:22:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:42.020 ************************************
00:06:42.020 START TEST locking_app_on_unlocked_coremask
00:06:42.020 ************************************
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72551
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72551 /var/tmp/spdk.sock
00:06:42.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72551 ']'
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:42.020 04:22:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.280 [2024-12-13 04:22:42.029567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:42.280 [2024-12-13 04:22:42.029805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72551 ]
00:06:42.280 [2024-12-13 04:22:42.189152] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:42.280 [2024-12-13 04:22:42.189278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.280 [2024-12-13 04:22:42.227699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72567
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72567 /var/tmp/spdk2.sock
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72567 ']'
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:42.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:42.849 04:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.107 [2024-12-13 04:22:42.924326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:43.107 [2024-12-13 04:22:42.924578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72567 ] 00:06:43.107 [2024-12-13 04:22:43.075835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.366 [2024-12-13 04:22:43.164705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.936 04:22:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.936 04:22:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.936 04:22:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72567 00:06:43.936 04:22:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72567 00:06:43.936 04:22:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72551 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72551 ']' 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72551 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72551 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:06:44.873 killing process with pid 72551 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72551' 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72551 00:06:44.873 04:22:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72551 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72567 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72567 ']' 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72567 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72567 00:06:46.252 killing process with pid 72567 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72567' 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72567 00:06:46.252 04:22:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 72567 00:06:46.511 00:06:46.511 real 0m4.558s 00:06:46.511 user 0m4.450s 00:06:46.511 sys 0m1.457s 00:06:46.511 04:22:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.511 04:22:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.511 ************************************ 00:06:46.511 END TEST locking_app_on_unlocked_coremask 00:06:46.511 ************************************ 00:06:46.770 04:22:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.770 04:22:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.770 04:22:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.770 04:22:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.770 ************************************ 00:06:46.770 START TEST locking_app_on_locked_coremask 00:06:46.770 ************************************ 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72647 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72647 /var/tmp/spdk.sock 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72647 ']' 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.770 04:22:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.770 [2024-12-13 04:22:46.645835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:46.770 [2024-12-13 04:22:46.646073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72647 ] 00:06:47.028 [2024-12-13 04:22:46.801861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.028 [2024-12-13 04:22:46.840039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72663 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72663 /var/tmp/spdk2.sock 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72663 /var/tmp/spdk2.sock 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72663 /var/tmp/spdk2.sock 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72663 ']' 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.597 04:22:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.597 [2024-12-13 04:22:47.525753] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:47.597 [2024-12-13 04:22:47.525951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72663 ] 00:06:47.856 [2024-12-13 04:22:47.674518] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72647 has claimed it. 00:06:47.856 [2024-12-13 04:22:47.674582] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.424 ERROR: process (pid: 72663) is no longer running 00:06:48.425 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72663) - No such process 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72647 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72647 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72647 00:06:48.425 04:22:48 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72647 ']' 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72647 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.425 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72647 00:06:48.684 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.684 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.684 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72647' 00:06:48.684 killing process with pid 72647 00:06:48.684 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72647 00:06:48.684 04:22:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72647 00:06:49.252 00:06:49.252 real 0m2.508s 00:06:49.252 user 0m2.540s 00:06:49.252 sys 0m0.766s 00:06:49.252 04:22:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.252 04:22:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.252 ************************************ 00:06:49.252 END TEST locking_app_on_locked_coremask 00:06:49.252 ************************************ 00:06:49.252 04:22:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.252 04:22:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:49.252 04:22:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.252 04:22:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.252 ************************************ 00:06:49.252 START TEST locking_overlapped_coremask 00:06:49.252 ************************************ 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72705 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72705 /var/tmp/spdk.sock 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72705 ']' 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.252 04:22:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.252 [2024-12-13 04:22:49.227020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:49.252 [2024-12-13 04:22:49.227226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72705 ] 00:06:49.512 [2024-12-13 04:22:49.376980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.512 [2024-12-13 04:22:49.421865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.512 [2024-12-13 04:22:49.422102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.512 [2024-12-13 04:22:49.421990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.085 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.085 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.085 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72723 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72723 /var/tmp/spdk2.sock 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72723 /var/tmp/spdk2.sock 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72723 /var/tmp/spdk2.sock 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72723 ']' 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.086 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.355 [2024-12-13 04:22:50.145226] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:50.355 [2024-12-13 04:22:50.145451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72723 ] 00:06:50.355 [2024-12-13 04:22:50.299467] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72705 has claimed it. 00:06:50.355 [2024-12-13 04:22:50.299555] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
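The claim failure traced above comes from SPDK's per-core advisory file locks: the second spdk_tgt finds the lock file for core 2 already held by pid 72705 and exits before starting its reactors. A minimal sketch of that first-holder-wins behavior, using `flock(1)` from a shell; note that `/tmp/demo_cpu_lock_002` is a hypothetical stand-in path for illustration only (SPDK's real lock files are `/var/tmp/spdk_cpu_lock_*`, taken via the `flock(2)` syscall inside app.c, not via this utility):

```shell
# Advisory per-core lock sketch: the first claimant succeeds, a later
# claimant with its own open file description fails fast, much like
# pid 72723 failing to claim core 2 in the log above.
lockfile=/tmp/demo_cpu_lock_002   # hypothetical stand-in path

exec 8>"$lockfile"                # open the lock file on fd 8
flock -n 8 && echo "core claimed"

# A second, independent open of the same file cannot take the lock
# while fd 8 is still open and holding it:
( exec 9>"$lockfile"
  flock -n 9 || echo "core already claimed" )
```

Because `flock` locks belong to the open file description, releasing fd 8 (or the process exiting) releases the core automatically, which is why killing the first spdk_tgt in the log frees the cores for the next test.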
00:06:50.930 ERROR: process (pid: 72723) is no longer running 00:06:50.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72723) - No such process 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72705 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72705 ']' 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72705 00:06:50.930 04:22:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72705 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72705' 00:06:50.930 killing process with pid 72705 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72705 00:06:50.930 04:22:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72705 00:06:51.501 00:06:51.501 real 0m2.286s 00:06:51.501 user 0m6.006s 00:06:51.501 sys 0m0.647s 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.501 ************************************ 00:06:51.501 END TEST locking_overlapped_coremask 00:06:51.501 ************************************ 00:06:51.501 04:22:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.501 04:22:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.501 04:22:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.501 04:22:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.501 ************************************ 00:06:51.501 START TEST 
locking_overlapped_coremask_via_rpc 00:06:51.501 ************************************ 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72771 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72771 /var/tmp/spdk.sock 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72771 ']' 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.501 04:22:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.763 [2024-12-13 04:22:51.586665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:51.763 [2024-12-13 04:22:51.586872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:06:51.763 [2024-12-13 04:22:51.744684] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.763 [2024-12-13 04:22:51.744808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.022 [2024-12-13 04:22:51.786536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.022 [2024-12-13 04:22:51.786486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.022 [2024-12-13 04:22:51.786679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72785 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72785 /var/tmp/spdk2.sock 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72785 ']' 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.590 04:22:52 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.590 04:22:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.590 [2024-12-13 04:22:52.464154] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:52.590 [2024-12-13 04:22:52.464365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72785 ] 00:06:52.850 [2024-12-13 04:22:52.614906] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.850 [2024-12-13 04:22:52.614985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.850 [2024-12-13 04:22:52.709162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.850 [2024-12-13 04:22:52.712673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.850 [2024-12-13 04:22:52.712798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.418 04:22:53 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.418 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.419 [2024-12-13 04:22:53.405631] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72771 has claimed it. 00:06:53.419 request: 00:06:53.419 { 00:06:53.419 "method": "framework_enable_cpumask_locks", 00:06:53.419 "req_id": 1 00:06:53.419 } 00:06:53.419 Got JSON-RPC error response 00:06:53.419 response: 00:06:53.419 { 00:06:53.419 "code": -32603, 00:06:53.419 "message": "Failed to claim CPU core: 2" 00:06:53.419 } 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72771 /var/tmp/spdk.sock 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 72771 ']' 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.419 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.678 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.678 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.678 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72785 /var/tmp/spdk2.sock 00:06:53.678 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72785 ']' 00:06:53.678 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.679 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.679 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:53.679 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.679 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.938 00:06:53.938 real 0m2.348s 00:06:53.938 user 0m1.037s 00:06:53.938 sys 0m0.182s 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.938 04:22:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.938 ************************************ 00:06:53.938 END TEST locking_overlapped_coremask_via_rpc 00:06:53.938 ************************************ 00:06:53.938 04:22:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.938 04:22:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72771 ]] 00:06:53.938 04:22:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72771 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72771 ']' 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72771 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72771 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.938 killing process with pid 72771 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72771' 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72771 00:06:53.938 04:22:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72771 00:06:54.877 04:22:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72785 ]] 00:06:54.877 04:22:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72785 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72785 ']' 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72785 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72785 00:06:54.877 killing process with pid 72785 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 72785' 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72785 00:06:54.877 04:22:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72785 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.446 Process with pid 72771 is not found 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72771 ]] 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72771 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72771 ']' 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72771 00:06:55.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72771) - No such process 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72771 is not found' 00:06:55.446 Process with pid 72785 is not found 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72785 ]] 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72785 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72785 ']' 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72785 00:06:55.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72785) - No such process 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72785 is not found' 00:06:55.446 04:22:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.446 00:06:55.446 real 0m21.467s 00:06:55.446 user 0m34.827s 00:06:55.446 sys 0m6.850s 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.446 ************************************ 00:06:55.446 END TEST cpu_locks 00:06:55.446 
************************************ 00:06:55.446 04:22:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.446 ************************************ 00:06:55.446 END TEST event 00:06:55.446 ************************************ 00:06:55.446 00:06:55.446 real 0m49.652s 00:06:55.446 user 1m33.744s 00:06:55.446 sys 0m10.524s 00:06:55.446 04:22:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.446 04:22:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.446 04:22:55 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.446 04:22:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.446 04:22:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.446 04:22:55 -- common/autotest_common.sh@10 -- # set +x 00:06:55.446 ************************************ 00:06:55.446 START TEST thread 00:06:55.446 ************************************ 00:06:55.446 04:22:55 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.705 * Looking for test storage... 
00:06:55.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:55.705 04:22:55 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.705 04:22:55 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.705 04:22:55 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.705 04:22:55 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.705 04:22:55 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.705 04:22:55 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.705 04:22:55 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.705 04:22:55 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.705 04:22:55 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.705 04:22:55 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.705 04:22:55 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.705 04:22:55 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.705 04:22:55 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.705 04:22:55 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.705 04:22:55 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.705 04:22:55 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.705 04:22:55 thread -- scripts/common.sh@345 -- # : 1 00:06:55.705 04:22:55 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.705 04:22:55 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.705 04:22:55 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.705 04:22:55 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.705 04:22:55 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.705 04:22:55 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.705 04:22:55 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.705 04:22:55 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.705 04:22:55 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.705 04:22:55 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.705 04:22:55 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.705 04:22:55 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.705 04:22:55 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.705 04:22:55 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.705 04:22:55 thread -- scripts/common.sh@368 -- # return 0 00:06:55.705 04:22:55 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.705 04:22:55 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.705 --rc genhtml_branch_coverage=1 00:06:55.705 --rc genhtml_function_coverage=1 00:06:55.705 --rc genhtml_legend=1 00:06:55.705 --rc geninfo_all_blocks=1 00:06:55.705 --rc geninfo_unexecuted_blocks=1 00:06:55.705 00:06:55.705 ' 00:06:55.706 04:22:55 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.706 --rc genhtml_branch_coverage=1 00:06:55.706 --rc genhtml_function_coverage=1 00:06:55.706 --rc genhtml_legend=1 00:06:55.706 --rc geninfo_all_blocks=1 00:06:55.706 --rc geninfo_unexecuted_blocks=1 00:06:55.706 00:06:55.706 ' 00:06:55.706 04:22:55 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.706 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.706 --rc genhtml_branch_coverage=1 00:06:55.706 --rc genhtml_function_coverage=1 00:06:55.706 --rc genhtml_legend=1 00:06:55.706 --rc geninfo_all_blocks=1 00:06:55.706 --rc geninfo_unexecuted_blocks=1 00:06:55.706 00:06:55.706 ' 00:06:55.706 04:22:55 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.706 --rc genhtml_branch_coverage=1 00:06:55.706 --rc genhtml_function_coverage=1 00:06:55.706 --rc genhtml_legend=1 00:06:55.706 --rc geninfo_all_blocks=1 00:06:55.706 --rc geninfo_unexecuted_blocks=1 00:06:55.706 00:06:55.706 ' 00:06:55.706 04:22:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.706 04:22:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.706 04:22:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.706 04:22:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.706 ************************************ 00:06:55.706 START TEST thread_poller_perf 00:06:55.706 ************************************ 00:06:55.706 04:22:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.706 [2024-12-13 04:22:55.658359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:55.706 [2024-12-13 04:22:55.658635] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72929 ] 00:06:55.965 [2024-12-13 04:22:55.813669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.965 [2024-12-13 04:22:55.853197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.965 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:57.346 [2024-12-13T04:22:57.361Z] ====================================== 00:06:57.346 [2024-12-13T04:22:57.361Z] busy:2298599640 (cyc) 00:06:57.346 [2024-12-13T04:22:57.361Z] total_run_count: 427000 00:06:57.346 [2024-12-13T04:22:57.361Z] tsc_hz: 2290000000 (cyc) 00:06:57.346 [2024-12-13T04:22:57.361Z] ====================================== 00:06:57.346 [2024-12-13T04:22:57.361Z] poller_cost: 5383 (cyc), 2350 (nsec) 00:06:57.346 00:06:57.346 real 0m1.323s 00:06:57.346 user 0m1.136s 00:06:57.346 sys 0m0.080s 00:06:57.346 04:22:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.346 04:22:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.346 ************************************ 00:06:57.346 END TEST thread_poller_perf 00:06:57.346 ************************************ 00:06:57.346 04:22:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.346 04:22:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.346 04:22:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.346 04:22:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.346 ************************************ 00:06:57.346 START TEST thread_poller_perf 00:06:57.346 
************************************ 00:06:57.346 04:22:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.346 [2024-12-13 04:22:57.055038] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:57.346 [2024-12-13 04:22:57.055177] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72965 ] 00:06:57.346 [2024-12-13 04:22:57.210706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.346 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:57.346 [2024-12-13 04:22:57.247618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.727 [2024-12-13T04:22:58.742Z] ====================================== 00:06:58.727 [2024-12-13T04:22:58.742Z] busy:2293464828 (cyc) 00:06:58.727 [2024-12-13T04:22:58.742Z] total_run_count: 4913000 00:06:58.727 [2024-12-13T04:22:58.742Z] tsc_hz: 2290000000 (cyc) 00:06:58.727 [2024-12-13T04:22:58.742Z] ====================================== 00:06:58.727 [2024-12-13T04:22:58.742Z] poller_cost: 466 (cyc), 203 (nsec) 00:06:58.727 00:06:58.727 real 0m1.308s 00:06:58.727 user 0m1.127s 00:06:58.727 sys 0m0.075s 00:06:58.728 04:22:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.728 04:22:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.728 ************************************ 00:06:58.728 END TEST thread_poller_perf 00:06:58.728 ************************************ 00:06:58.728 04:22:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.728 ************************************ 00:06:58.728 END TEST thread 00:06:58.728 ************************************ 00:06:58.728 
00:06:58.728 real 0m3.007s 00:06:58.728 user 0m2.420s 00:06:58.728 sys 0m0.384s 00:06:58.728 04:22:58 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.728 04:22:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.728 04:22:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:58.728 04:22:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:58.728 04:22:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.728 04:22:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.728 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:06:58.728 ************************************ 00:06:58.728 START TEST app_cmdline 00:06:58.728 ************************************ 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:58.728 * Looking for test storage... 00:06:58.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.728 04:22:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.728 --rc genhtml_branch_coverage=1 00:06:58.728 --rc genhtml_function_coverage=1 00:06:58.728 --rc 
genhtml_legend=1 00:06:58.728 --rc geninfo_all_blocks=1 00:06:58.728 --rc geninfo_unexecuted_blocks=1 00:06:58.728 00:06:58.728 ' 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.728 --rc genhtml_branch_coverage=1 00:06:58.728 --rc genhtml_function_coverage=1 00:06:58.728 --rc genhtml_legend=1 00:06:58.728 --rc geninfo_all_blocks=1 00:06:58.728 --rc geninfo_unexecuted_blocks=1 00:06:58.728 00:06:58.728 ' 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.728 --rc genhtml_branch_coverage=1 00:06:58.728 --rc genhtml_function_coverage=1 00:06:58.728 --rc genhtml_legend=1 00:06:58.728 --rc geninfo_all_blocks=1 00:06:58.728 --rc geninfo_unexecuted_blocks=1 00:06:58.728 00:06:58.728 ' 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.728 --rc genhtml_branch_coverage=1 00:06:58.728 --rc genhtml_function_coverage=1 00:06:58.728 --rc genhtml_legend=1 00:06:58.728 --rc geninfo_all_blocks=1 00:06:58.728 --rc geninfo_unexecuted_blocks=1 00:06:58.728 00:06:58.728 ' 00:06:58.728 04:22:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.728 04:22:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73049 00:06:58.728 04:22:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.728 04:22:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73049 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73049 ']' 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.728 04:22:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.988 [2024-12-13 04:22:58.779054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:58.988 [2024-12-13 04:22:58.779198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73049 ] 00:06:58.988 [2024-12-13 04:22:58.932659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.988 [2024-12-13 04:22:58.971712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.927 { 00:06:59.927 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:59.927 "fields": { 00:06:59.927 "major": 25, 00:06:59.927 "minor": 1, 00:06:59.927 "patch": 0, 00:06:59.927 "suffix": "-pre", 00:06:59.927 "commit": "e01cb43b8" 00:06:59.927 } 00:06:59.927 } 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.927 04:22:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.927 04:22:59 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.187 request: 00:07:00.187 { 00:07:00.187 "method": "env_dpdk_get_mem_stats", 00:07:00.187 "req_id": 1 00:07:00.187 } 00:07:00.187 Got JSON-RPC error response 00:07:00.187 response: 00:07:00.187 { 00:07:00.187 "code": -32601, 00:07:00.187 "message": "Method not found" 00:07:00.187 } 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.187 04:23:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73049 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73049 ']' 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73049 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73049 00:07:00.187 killing process with pid 73049 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73049' 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@973 -- # kill 73049 00:07:00.187 04:23:00 app_cmdline -- common/autotest_common.sh@978 -- # wait 73049 00:07:00.755 00:07:00.755 real 0m2.210s 00:07:00.755 user 0m2.274s 00:07:00.755 sys 0m0.689s 00:07:00.755 
************************************ 00:07:00.755 END TEST app_cmdline 00:07:00.755 ************************************ 00:07:00.755 04:23:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.755 04:23:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.755 04:23:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.755 04:23:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.755 04:23:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.756 04:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:00.756 ************************************ 00:07:00.756 START TEST version 00:07:00.756 ************************************ 00:07:00.756 04:23:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:01.015 * Looking for test storage... 00:07:01.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.015 04:23:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.015 04:23:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.015 04:23:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.015 04:23:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.015 04:23:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.015 04:23:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.015 04:23:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.015 04:23:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.015 04:23:00 version -- scripts/common.sh@340 -- # ver1_l=2 
00:07:01.015 04:23:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.015 04:23:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.015 04:23:00 version -- scripts/common.sh@344 -- # case "$op" in 00:07:01.015 04:23:00 version -- scripts/common.sh@345 -- # : 1 00:07:01.015 04:23:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.015 04:23:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.015 04:23:00 version -- scripts/common.sh@365 -- # decimal 1 00:07:01.015 04:23:00 version -- scripts/common.sh@353 -- # local d=1 00:07:01.015 04:23:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.015 04:23:00 version -- scripts/common.sh@355 -- # echo 1 00:07:01.015 04:23:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.015 04:23:00 version -- scripts/common.sh@366 -- # decimal 2 00:07:01.015 04:23:00 version -- scripts/common.sh@353 -- # local d=2 00:07:01.015 04:23:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.015 04:23:00 version -- scripts/common.sh@355 -- # echo 2 00:07:01.015 04:23:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.015 04:23:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.015 04:23:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.015 04:23:00 version -- scripts/common.sh@368 -- # return 0 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.015 --rc genhtml_branch_coverage=1 00:07:01.015 --rc genhtml_function_coverage=1 00:07:01.015 --rc genhtml_legend=1 00:07:01.015 --rc geninfo_all_blocks=1 00:07:01.015 --rc geninfo_unexecuted_blocks=1 00:07:01.015 00:07:01.015 ' 00:07:01.015 04:23:00 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.015 --rc genhtml_branch_coverage=1 00:07:01.015 --rc genhtml_function_coverage=1 00:07:01.015 --rc genhtml_legend=1 00:07:01.015 --rc geninfo_all_blocks=1 00:07:01.015 --rc geninfo_unexecuted_blocks=1 00:07:01.015 00:07:01.015 ' 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.015 --rc genhtml_branch_coverage=1 00:07:01.015 --rc genhtml_function_coverage=1 00:07:01.015 --rc genhtml_legend=1 00:07:01.015 --rc geninfo_all_blocks=1 00:07:01.015 --rc geninfo_unexecuted_blocks=1 00:07:01.015 00:07:01.015 ' 00:07:01.015 04:23:00 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.015 --rc genhtml_branch_coverage=1 00:07:01.015 --rc genhtml_function_coverage=1 00:07:01.015 --rc genhtml_legend=1 00:07:01.015 --rc geninfo_all_blocks=1 00:07:01.015 --rc geninfo_unexecuted_blocks=1 00:07:01.015 00:07:01.015 ' 00:07:01.015 04:23:00 version -- app/version.sh@17 -- # get_header_version major 00:07:01.015 04:23:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # cut -f2 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.015 04:23:00 version -- app/version.sh@17 -- # major=25 00:07:01.015 04:23:00 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.015 04:23:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # cut -f2 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.015 04:23:00 version -- app/version.sh@18 -- 
# minor=1 00:07:01.015 04:23:00 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.015 04:23:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # cut -f2 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.015 04:23:00 version -- app/version.sh@19 -- # patch=0 00:07:01.015 04:23:00 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.015 04:23:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # cut -f2 00:07:01.015 04:23:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.015 04:23:01 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.015 04:23:01 version -- app/version.sh@22 -- # version=25.1 00:07:01.015 04:23:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.015 04:23:01 version -- app/version.sh@28 -- # version=25.1rc0 00:07:01.015 04:23:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:01.015 04:23:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.275 04:23:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:01.275 04:23:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:01.275 ************************************ 00:07:01.275 END TEST version 00:07:01.275 ************************************ 00:07:01.275 00:07:01.275 real 0m0.326s 00:07:01.275 user 0m0.187s 00:07:01.275 sys 0m0.197s 00:07:01.275 04:23:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.275 04:23:01 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.275 04:23:01 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.275 04:23:01 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:01.275 04:23:01 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:01.275 04:23:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.275 04:23:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.275 04:23:01 -- common/autotest_common.sh@10 -- # set +x 00:07:01.275 ************************************ 00:07:01.275 START TEST bdev_raid 00:07:01.275 ************************************ 00:07:01.275 04:23:01 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:01.275 * Looking for test storage... 00:07:01.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:01.275 04:23:01 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.275 04:23:01 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.275 04:23:01 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.535 
04:23:01 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.535 04:23:01 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.535 --rc genhtml_branch_coverage=1 00:07:01.535 --rc genhtml_function_coverage=1 00:07:01.535 --rc genhtml_legend=1 00:07:01.535 --rc geninfo_all_blocks=1 00:07:01.535 --rc geninfo_unexecuted_blocks=1 00:07:01.535 00:07:01.535 ' 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:07:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.535 --rc genhtml_branch_coverage=1 00:07:01.535 --rc genhtml_function_coverage=1 00:07:01.535 --rc genhtml_legend=1 00:07:01.535 --rc geninfo_all_blocks=1 00:07:01.535 --rc geninfo_unexecuted_blocks=1 00:07:01.535 00:07:01.535 ' 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.535 --rc genhtml_branch_coverage=1 00:07:01.535 --rc genhtml_function_coverage=1 00:07:01.535 --rc genhtml_legend=1 00:07:01.535 --rc geninfo_all_blocks=1 00:07:01.535 --rc geninfo_unexecuted_blocks=1 00:07:01.535 00:07:01.535 ' 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.535 --rc genhtml_branch_coverage=1 00:07:01.535 --rc genhtml_function_coverage=1 00:07:01.535 --rc genhtml_legend=1 00:07:01.535 --rc geninfo_all_blocks=1 00:07:01.535 --rc geninfo_unexecuted_blocks=1 00:07:01.535 00:07:01.535 ' 00:07:01.535 04:23:01 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:01.535 04:23:01 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:01.535 04:23:01 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:01.535 04:23:01 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:01.535 04:23:01 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:01.535 04:23:01 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:01.535 04:23:01 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.535 04:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
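The `lt 1.15 2` / `cmp_versions` xtrace above (splitting on `IFS=.-:` and comparing components) implements a dotted-version comparison. A minimal standalone sketch of that logic, simplified from what the `scripts/common.sh` trace shows and assuming purely numeric version components:

```shell
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions / lt helper traced above:
# split both versions on . - : and compare component by component.
lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1                 # ver1 newer -> not "less than"
        (( a < b )) && return 0                 # ver1 older -> "less than"
    done
    return 1                                    # equal -> not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Note that `1.15 < 2` holds under component-wise comparison even though `1.15 > 2` as a decimal, which is exactly why the script splits rather than using arithmetic on the whole string.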
00:07:01.535 ************************************ 00:07:01.535 START TEST raid1_resize_data_offset_test 00:07:01.535 ************************************ 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=73214 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73214' 00:07:01.535 Process raid pid: 73214 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73214 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 73214 ']' 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.535 04:23:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.535 [2024-12-13 04:23:01.466735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
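The `waitforlisten 73214` step above blocks until the freshly started `bdev_svc` app is up and listening on `/var/tmp/spdk.sock`. A simplified sketch of that polling pattern, using a hypothetical `waitforpath` helper (the real `waitforlisten` in `autotest_common.sh` also probes the RPC server; this demo waits only for a path to exist, and uses a plain file in place of a UNIX socket):

```shell
# Poll until a path appears, bounded by a retry budget (sketch of the
# waitforlisten pattern traced above; not the real autotest helper).
waitforpath() {
    local path=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # simulate the app creating its socket
waitforpath "$tmp/spdk.sock" 50 && echo "listening"
wait
rm -rf "$tmp"
```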
00:07:01.535 [2024-12-13 04:23:01.466935] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.793 [2024-12-13 04:23:01.625056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.793 [2024-12-13 04:23:01.664378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.793 [2024-12-13 04:23:01.743282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.793 [2024-12-13 04:23:01.743397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.361 malloc0 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.361 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.621 malloc1 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.621 04:23:02 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.621 null0 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.621 [2024-12-13 04:23:02.406642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:02.621 [2024-12-13 04:23:02.408961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:02.621 [2024-12-13 04:23:02.409089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:02.621 [2024-12-13 04:23:02.409253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:02.621 [2024-12-13 04:23:02.409268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:02.621 [2024-12-13 04:23:02.409583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:02.621 [2024-12-13 04:23:02.409738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:02.621 [2024-12-13 04:23:02.409759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:02.621 [2024-12-13 04:23:02.409901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
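The `rpc_cmd` calls traced above build the RAID-1 under test: two malloc bdevs plus a null bdev, then `bdev_raid_create` over all three with `-s` for an on-disk superblock. A dry-run sketch of that sequence; since a live SPDK target is needed for the real thing (`rpc_cmd` forwards to the app's RPC socket), `rpc_cmd` is stubbed here to just record the calls it would make:

```shell
# Stub that records RPC calls instead of sending them to a live SPDK app.
rpc_cmd() { echo "rpc: $*"; }

# Base bdevs, as in the trace: 64 MiB malloc bdevs with 512-byte blocks.
rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
rpc_cmd bdev_null_create null0 64 512

# RAID-1 over the three base bdevs; -s requests an on-disk superblock,
# which is what gives the non-zero data_offset checked later in the test.
rpc_cmd bdev_raid_create -n Raid -r 1 -b 'malloc0 malloc1 null0' -s
```

The test then reads back `bdev_raid_get_bdevs all` and checks `.[].base_bdevs_list[2].data_offset` with `jq`, as the trace shows.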
00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.621 [2024-12-13 04:23:02.466537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.621 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.880 malloc2 00:07:02.880 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.881 [2024-12-13 04:23:02.683591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:02.881 [2024-12-13 04:23:02.693660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.881 [2024-12-13 04:23:02.696208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73214 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 73214 ']' 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 73214 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73214 00:07:02.881 killing process with pid 73214 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73214' 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 73214 00:07:02.881 04:23:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 73214 00:07:02.881 [2024-12-13 04:23:02.791888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.881 [2024-12-13 04:23:02.793572] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:02.881 [2024-12-13 04:23:02.793657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.881 [2024-12-13 04:23:02.793675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:02.881 [2024-12-13 04:23:02.802437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.881 [2024-12-13 04:23:02.802786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.881 [2024-12-13 04:23:02.802804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:03.449 [2024-12-13 04:23:03.190671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.709 04:23:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:03.709 00:07:03.709 real 0m2.130s 00:07:03.709 user 0m1.939s 00:07:03.709 sys 0m0.637s 00:07:03.709 04:23:03 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.709 04:23:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.709 ************************************ 00:07:03.709 END TEST raid1_resize_data_offset_test 00:07:03.709 ************************************ 00:07:03.709 04:23:03 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:03.709 04:23:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.709 04:23:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.709 04:23:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.709 ************************************ 00:07:03.709 START TEST raid0_resize_superblock_test 00:07:03.709 ************************************ 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73270 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.709 Process raid pid: 73270 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73270' 00:07:03.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
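The `killprocess 73214` sequence traced at the end of the previous test probes the pid with `kill -0`, checks the process name via `ps --no-headers -o comm=`, then kills and reaps it. A simplified sketch of that pattern (no sudo handling, unlike the real `autotest_common.sh` helper; assumes GNU ps, as in the trace):

```shell
# Sketch of the killprocess pattern: verify the pid is alive and named,
# then terminate and reap it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1      # still alive?
    local name
    name=$(ps --no-headers -o comm= -p "$pid" 2>/dev/null || echo '?')
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap; exit status is 128+SIGTERM
}

sleep 60 &          # stand-in for the bdev_svc app started by the test
killprocess $!
```

The follow-up `wait 73214` in the trace serves the same reaping purpose, ensuring the app has fully exited before the next test starts its own instance on the same RPC socket.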
00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73270 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73270 ']' 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.709 04:23:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.709 [2024-12-13 04:23:03.664416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:03.709 [2024-12-13 04:23:03.664653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.968 [2024-12-13 04:23:03.822602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.968 [2024-12-13 04:23:03.861892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.968 [2024-12-13 04:23:03.938875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.968 [2024-12-13 04:23:03.939023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.542 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.542 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:04.542 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:04.542 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.542 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.812 malloc0 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.812 [2024-12-13 04:23:04.706766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:04.812 [2024-12-13 04:23:04.706923] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.812 [2024-12-13 04:23:04.706967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:04.812 [2024-12-13 04:23:04.707017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.812 [2024-12-13 04:23:04.709580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.812 [2024-12-13 04:23:04.709661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:04.812 pt0 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.812 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 92415938-a8c6-4588-bdf6-9cc9681acd66 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 bbf1ec7a-d8d6-4ad6-b9ee-009a2d655b20 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.076 04:23:04 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 aef49e29-d602-49b1-a9c5-bb2c5505f152 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 [2024-12-13 04:23:04.914867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bbf1ec7a-d8d6-4ad6-b9ee-009a2d655b20 is claimed 00:07:05.076 [2024-12-13 04:23:04.914969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev aef49e29-d602-49b1-a9c5-bb2c5505f152 is claimed 00:07:05.076 [2024-12-13 04:23:04.915077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:05.076 [2024-12-13 04:23:04.915090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:05.076 [2024-12-13 04:23:04.915419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:05.076 [2024-12-13 04:23:04.915634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:05.076 [2024-12-13 04:23:04.915647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:05.076 [2024-12-13 04:23:04.915772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:05.076 04:23:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.076 [2024-12-13 
04:23:05.030862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:05.076 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:05.077 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.077 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 [2024-12-13 04:23:05.078732] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.077 [2024-12-13 04:23:05.078757] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'bbf1ec7a-d8d6-4ad6-b9ee-009a2d655b20' was resized: old size 131072, new size 204800 00:07:05.077 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.077 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:05.077 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.077 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 [2024-12-13 04:23:05.090662] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.077 [2024-12-13 04:23:05.090684] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aef49e29-d602-49b1-a9c5-bb2c5505f152' was resized: old size 131072, new size 204800 00:07:05.077 
[2024-12-13 04:23:05.090713] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 [2024-12-13 04:23:05.206586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 [2024-12-13 04:23:05.254311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:05.335 [2024-12-13 04:23:05.254438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:05.335 [2024-12-13 04:23:05.254464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.335 [2024-12-13 04:23:05.254476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:05.335 [2024-12-13 04:23:05.254600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.335 [2024-12-13 04:23:05.254638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.335 
[2024-12-13 04:23:05.254650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 [2024-12-13 04:23:05.266239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:05.335 [2024-12-13 04:23:05.266310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.335 [2024-12-13 04:23:05.266329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:05.335 [2024-12-13 04:23:05.266340] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.335 [2024-12-13 04:23:05.268731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.335 [2024-12-13 04:23:05.268770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:05.335 [2024-12-13 04:23:05.270182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev bbf1ec7a-d8d6-4ad6-b9ee-009a2d655b20 00:07:05.335 [2024-12-13 04:23:05.270238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bbf1ec7a-d8d6-4ad6-b9ee-009a2d655b20 is claimed 00:07:05.335 [2024-12-13 04:23:05.270314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aef49e29-d602-49b1-a9c5-bb2c5505f152 00:07:05.335 [2024-12-13 04:23:05.270344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev aef49e29-d602-49b1-a9c5-bb2c5505f152 is claimed 00:07:05.335 [2024-12-13 04:23:05.270471] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev aef49e29-d602-49b1-a9c5-bb2c5505f152 (2) smaller than existing raid bdev Raid (3) 00:07:05.335 [2024-12-13 04:23:05.270494] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev bbf1ec7a-d8d6-4ad6-b9ee-009a2d655b20: File exists 00:07:05.335 [2024-12-13 04:23:05.270526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:05.335 [2024-12-13 04:23:05.270537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:05.335 [2024-12-13 04:23:05.270759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:05.335 pt0 00:07:05.335 [2024-12-13 04:23:05.270907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:05.335 [2024-12-13 04:23:05.270917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:05.335 [2024-12-13 04:23:05.271025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case 
$raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.335 [2024-12-13 04:23:05.294409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73270 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73270 ']' 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73270 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.335 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73270 00:07:05.595 killing process with pid 73270 00:07:05.595 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.595 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.595 04:23:05 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 73270' 00:07:05.595 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73270 00:07:05.595 [2024-12-13 04:23:05.377283] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.595 [2024-12-13 04:23:05.377354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.595 [2024-12-13 04:23:05.377393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.595 [2024-12-13 04:23:05.377401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:05.595 04:23:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73270 00:07:05.854 [2024-12-13 04:23:05.680117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.114 04:23:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:06.114 00:07:06.114 real 0m2.420s 00:07:06.114 user 0m2.543s 00:07:06.114 sys 0m0.666s 00:07:06.114 ************************************ 00:07:06.114 END TEST raid0_resize_superblock_test 00:07:06.114 ************************************ 00:07:06.114 04:23:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.114 04:23:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.114 04:23:06 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:06.114 04:23:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.114 04:23:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.114 04:23:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.114 ************************************ 00:07:06.114 START TEST raid1_resize_superblock_test 00:07:06.114 
************************************ 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73347 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73347' 00:07:06.114 Process raid pid: 73347 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73347 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73347 ']' 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.114 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.373 [2024-12-13 04:23:06.157850] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:06.373 [2024-12-13 04:23:06.158048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.373 [2024-12-13 04:23:06.290128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.373 [2024-12-13 04:23:06.328478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.632 [2024-12-13 04:23:06.404071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.632 [2024-12-13 04:23:06.404174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.202 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.202 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.202 04:23:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:07.202 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.202 04:23:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.202 malloc0 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.202 [2024-12-13 04:23:07.209956] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:07.202 [2024-12-13 04:23:07.210031] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.202 [2024-12-13 04:23:07.210063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:07.202 [2024-12-13 04:23:07.210078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.202 [2024-12-13 04:23:07.212540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.202 [2024-12-13 04:23:07.212655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:07.202 pt0 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.202 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.462 84615f37-9368-4e61-b971-9414f254025a 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.462 99c53740-4b50-4a26-926e-8c445ad5cc6b 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.462 04:23:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.462 2666f0b4-0b80-4ed4-b1ac-ff6359e7ce92 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.462 [2024-12-13 04:23:07.417604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 99c53740-4b50-4a26-926e-8c445ad5cc6b is claimed 00:07:07.462 [2024-12-13 04:23:07.417795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2666f0b4-0b80-4ed4-b1ac-ff6359e7ce92 is claimed 00:07:07.462 [2024-12-13 04:23:07.417929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:07.462 [2024-12-13 04:23:07.417944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:07.462 [2024-12-13 04:23:07.418264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:07.462 [2024-12-13 04:23:07.418427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:07.462 [2024-12-13 04:23:07.418438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:07.462 [2024-12-13 04:23:07.418613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.462 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.721 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.721 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:07.721 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.721 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:07.721 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 [2024-12-13 
04:23:07.533634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 [2024-12-13 04:23:07.581432] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.722 [2024-12-13 04:23:07.581529] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '99c53740-4b50-4a26-926e-8c445ad5cc6b' was resized: old size 131072, new size 204800 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 [2024-12-13 04:23:07.593365] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.722 [2024-12-13 04:23:07.593432] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2666f0b4-0b80-4ed4-b1ac-ff6359e7ce92' was resized: old size 131072, new size 204800 00:07:07.722 
[2024-12-13 04:23:07.593480] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.722 04:23:07 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 [2024-12-13 04:23:07.705279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.722 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 [2024-12-13 04:23:07.753016] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:07.982 [2024-12-13 04:23:07.753082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:07.982 [2024-12-13 04:23:07.753131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:07.982 [2024-12-13 04:23:07.753268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.982 [2024-12-13 04:23:07.753398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.982 [2024-12-13 04:23:07.753449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.982 
[2024-12-13 04:23:07.753475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 [2024-12-13 04:23:07.764952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:07.982 [2024-12-13 04:23:07.765002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.982 [2024-12-13 04:23:07.765035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:07.982 [2024-12-13 04:23:07.765048] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.982 [2024-12-13 04:23:07.767464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.982 [2024-12-13 04:23:07.767498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:07.982 [2024-12-13 04:23:07.768932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 99c53740-4b50-4a26-926e-8c445ad5cc6b 00:07:07.982 [2024-12-13 04:23:07.768992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 99c53740-4b50-4a26-926e-8c445ad5cc6b is claimed 00:07:07.982 [2024-12-13 04:23:07.769070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2666f0b4-0b80-4ed4-b1ac-ff6359e7ce92 00:07:07.982 [2024-12-13 04:23:07.769103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2666f0b4-0b80-4ed4-b1ac-ff6359e7ce92 is claimed 00:07:07.982 [2024-12-13 04:23:07.769220] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2666f0b4-0b80-4ed4-b1ac-ff6359e7ce92 (2) smaller than existing raid bdev Raid (3) 00:07:07.982 [2024-12-13 04:23:07.769242] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 99c53740-4b50-4a26-926e-8c445ad5cc6b: File exists 00:07:07.982 [2024-12-13 04:23:07.769282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:07.982 [2024-12-13 04:23:07.769291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:07.982 [2024-12-13 04:23:07.769527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:07.982 pt0 00:07:07.982 [2024-12-13 04:23:07.769686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:07.982 [2024-12-13 04:23:07.769697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:07.982 [2024-12-13 04:23:07.769806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 [2024-12-13 04:23:07.793166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73347 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73347 ']' 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73347 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73347 00:07:07.982 killing process with pid 73347 00:07:07.982 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.983 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.983 04:23:07 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 73347' 00:07:07.983 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73347 00:07:07.983 [2024-12-13 04:23:07.875774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.983 [2024-12-13 04:23:07.875828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.983 [2024-12-13 04:23:07.875869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.983 [2024-12-13 04:23:07.875878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:07.983 04:23:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73347 00:07:08.242 [2024-12-13 04:23:08.180877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.501 04:23:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:08.501 00:07:08.501 real 0m2.427s 00:07:08.501 user 0m2.585s 00:07:08.501 sys 0m0.641s 00:07:08.501 ************************************ 00:07:08.501 END TEST raid1_resize_superblock_test 00:07:08.501 ************************************ 00:07:08.501 04:23:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.501 04:23:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.760 04:23:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:08.760 04:23:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:08.760 04:23:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:08.760 04:23:08 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:08.760 04:23:08 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:08.760 04:23:08 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:08.760 
04:23:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.760 04:23:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.760 04:23:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.760 ************************************ 00:07:08.760 START TEST raid_function_test_raid0 00:07:08.760 ************************************ 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:08.760 Process raid pid: 73427 00:07:08.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73427 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.760 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73427' 00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73427 00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 73427 ']' 00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.761 04:23:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:08.761 [2024-12-13 04:23:08.685099] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:08.761 [2024-12-13 04:23:08.685327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.019 [2024-12-13 04:23:08.842402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.019 [2024-12-13 04:23:08.881593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.019 [2024-12-13 04:23:08.959438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.019 [2024-12-13 04:23:08.959593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.588 Base_1 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:09.588 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.589 
04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.589 Base_2 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.589 [2024-12-13 04:23:09.561130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.589 [2024-12-13 04:23:09.563374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.589 [2024-12-13 04:23:09.563446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:09.589 [2024-12-13 04:23:09.563469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.589 [2024-12-13 04:23:09.563794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:09.589 [2024-12-13 04:23:09.563958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:09.589 [2024-12-13 04:23:09.563969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:09.589 [2024-12-13 04:23:09.564098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.589 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:09.848 [2024-12-13 04:23:09.808676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:09.848 /dev/nbd0 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.848 1+0 records in 00:07:09.848 1+0 records out 00:07:09.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613587 s, 6.7 MB/s 00:07:09.848 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.108 04:23:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:10.108 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.108 { 00:07:10.108 "nbd_device": "/dev/nbd0", 00:07:10.108 "bdev_name": "raid" 00:07:10.108 } 00:07:10.108 ]' 00:07:10.108 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.108 { 00:07:10.108 "nbd_device": "/dev/nbd0", 00:07:10.108 "bdev_name": "raid" 00:07:10.108 } 00:07:10.108 ]' 00:07:10.108 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.108 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:10.108 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:10.367 4096+0 records in 00:07:10.367 4096+0 records out 00:07:10.367 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0364922 s, 57.5 MB/s 00:07:10.367 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:10.627 4096+0 records in 00:07:10.627 4096+0 records out 00:07:10.627 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.219127 s, 9.6 MB/s 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:10.627 128+0 records in 00:07:10.627 128+0 records out 00:07:10.627 65536 bytes (66 kB, 64 KiB) copied, 0.00132821 s, 49.3 MB/s 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:10.627 2035+0 records in 00:07:10.627 2035+0 records out 00:07:10.627 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0150185 s, 69.4 MB/s 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.627 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:10.628 456+0 records in 00:07:10.628 456+0 records out 00:07:10.628 233472 bytes (233 kB, 228 KiB) copied, 0.0039541 s, 59.0 MB/s 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.628 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.887 [2024-12-13 04:23:10.750768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.887 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:11.147 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.147 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.147 04:23:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73427 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 73427 ']' 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 73427 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73427 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73427' 00:07:11.147 killing process with pid 73427 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 73427 00:07:11.147 [2024-12-13 04:23:11.073345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.147 [2024-12-13 04:23:11.073483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.147 [2024-12-13 04:23:11.073542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.147 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 73427 00:07:11.147 [2024-12-13 04:23:11.073556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:11.147 [2024-12-13 04:23:11.115966] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.716 04:23:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:11.716 00:07:11.716 real 0m2.843s 00:07:11.716 user 0m3.381s 00:07:11.716 sys 0m1.024s 00:07:11.716 ************************************ 00:07:11.716 END TEST raid_function_test_raid0 00:07:11.716 ************************************ 00:07:11.716 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.716 04:23:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 04:23:11 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:11.716 04:23:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.716 04:23:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.716 04:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 
************************************ 00:07:11.716 START TEST raid_function_test_concat 00:07:11.716 ************************************ 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=73542 00:07:11.716 Process raid pid: 73542 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73542' 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73542 00:07:11.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 73542 ']' 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.716 04:23:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 [2024-12-13 04:23:11.605997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:11.716 [2024-12-13 04:23:11.606133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.975 [2024-12-13 04:23:11.762516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.975 [2024-12-13 04:23:11.806321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.975 [2024-12-13 04:23:11.886015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.975 [2024-12-13 04:23:11.886059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.545 Base_1 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.545 Base_2 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.545 [2024-12-13 04:23:12.491519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:12.545 [2024-12-13 04:23:12.493770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:12.545 [2024-12-13 04:23:12.493835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:12.545 [2024-12-13 04:23:12.493847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.545 [2024-12-13 04:23:12.494106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:12.545 [2024-12-13 04:23:12.494248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:12.545 [2024-12-13 04:23:12.494262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:12.545 [2024-12-13 04:23:12.494391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:12.545 04:23:12 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:12.545 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:12.805 [2024-12-13 04:23:12.739064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:12.805 /dev/nbd0 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.805 1+0 records in 00:07:12.805 1+0 records out 00:07:12.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585899 s, 7.0 MB/s 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.805 
04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.805 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:13.064 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.064 { 00:07:13.064 "nbd_device": "/dev/nbd0", 00:07:13.064 "bdev_name": "raid" 00:07:13.064 } 00:07:13.064 ]' 00:07:13.064 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.064 { 00:07:13.064 "nbd_device": "/dev/nbd0", 00:07:13.064 "bdev_name": "raid" 00:07:13.064 } 00:07:13.064 ]' 00:07:13.064 04:23:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:13.064 
04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:13.064 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:13.324 4096+0 records in 00:07:13.324 4096+0 records out 00:07:13.324 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0337173 s, 62.2 MB/s 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:13.324 4096+0 records in 00:07:13.324 4096+0 
records out 00:07:13.324 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.179035 s, 11.7 MB/s 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:13.324 128+0 records in 00:07:13.324 128+0 records out 00:07:13.324 65536 bytes (66 kB, 64 KiB) copied, 0.00126961 s, 51.6 MB/s 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:13.324 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:13.583 2035+0 records in 00:07:13.583 2035+0 records out 00:07:13.583 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0135283 s, 77.0 MB/s 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:13.583 456+0 records in 00:07:13.583 456+0 records out 00:07:13.583 233472 bytes (233 kB, 228 KiB) copied, 0.00351521 s, 66.4 MB/s 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.583 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.843 [2024-12-13 04:23:13.622400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.843 04:23:13 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.843 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73542 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 73542 ']' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 73542 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73542 00:07:14.103 killing process with pid 73542 00:07:14.103 04:23:13 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73542' 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 73542 00:07:14.103 [2024-12-13 04:23:13.936367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.103 [2024-12-13 04:23:13.936519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.103 04:23:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 73542 00:07:14.103 [2024-12-13 04:23:13.936584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.103 [2024-12-13 04:23:13.936597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:14.103 [2024-12-13 04:23:13.977947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.363 04:23:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:14.363 00:07:14.363 real 0m2.789s 00:07:14.363 user 0m3.347s 00:07:14.363 sys 0m0.999s 00:07:14.363 ************************************ 00:07:14.363 END TEST raid_function_test_concat 00:07:14.363 ************************************ 00:07:14.363 04:23:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.363 04:23:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:14.363 04:23:14 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:14.363 04:23:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.363 04:23:14 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.363 04:23:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.363 ************************************ 00:07:14.363 START TEST raid0_resize_test 00:07:14.363 ************************************ 00:07:14.363 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:14.363 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:14.363 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:14.363 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:14.363 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73658 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73658' 00:07:14.622 Process raid pid: 73658 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73658 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73658 ']' 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.622 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.622 04:23:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.622 [2024-12-13 04:23:14.463696] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:14.622 [2024-12-13 04:23:14.463821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.622 [2024-12-13 04:23:14.620863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.883 [2024-12-13 04:23:14.658836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.883 [2024-12-13 04:23:14.733934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.883 [2024-12-13 04:23:14.733974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 Base_1 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 Base_2 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 [2024-12-13 04:23:15.307393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:15.452 [2024-12-13 04:23:15.309484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:15.452 [2024-12-13 04:23:15.309539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:15.452 [2024-12-13 04:23:15.309549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:15.452 [2024-12-13 04:23:15.309803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:15.452 [2024-12-13 04:23:15.309921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:15.452 [2024-12-13 04:23:15.309930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:15.452 [2024-12-13 04:23:15.310053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
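The size assertions in raid_resize_test reduce to block-count arithmetic. A minimal sketch for the raid0 case in this log, under the assumptions the test itself uses (512-byte blocks, two 32 MiB base bdevs, both later resized to 64 MiB); the computed values match the 131072 and 262144 `num_blocks` figures reported by `bdev_get_bdevs` above and below:

```shell
# Block-count arithmetic behind the raid0 resize checks (values taken
# from this log: blksize 512, two 32 MiB base bdevs striped as raid0,
# so the raid's capacity is the sum of the bases).
blksize=512
bdev_size_mb=32
new_bdev_size_mb=64

# Before resize: 2 bases * 32 MiB / 512 B per block = 131072 blocks,
# i.e. raid_size_mb = 64.
blkcnt=$(( 2 * bdev_size_mb * 1024 * 1024 / blksize ))
raid_size_mb=$(( blkcnt * blksize / (1024 * 1024) ))

# After both bases grow to 64 MiB: 262144 blocks, raid_size_mb = 128.
new_blkcnt=$(( 2 * new_bdev_size_mb * 1024 * 1024 / blksize ))
new_raid_size_mb=$(( new_blkcnt * blksize / (1024 * 1024) ))

echo "$blkcnt $raid_size_mb $new_blkcnt $new_raid_size_mb"
```

For raid1 the same arithmetic would not double, since mirrored bases contribute one copy of the capacity rather than summing.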
00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 [2024-12-13 04:23:15.319359] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.452 [2024-12-13 04:23:15.319384] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:15.452 true 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 [2024-12-13 04:23:15.335534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 [2024-12-13 04:23:15.379225] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.452 [2024-12-13 04:23:15.379247] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:15.452 [2024-12-13 04:23:15.379276] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:15.452 true 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.452 [2024-12-13 04:23:15.391403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73658 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 73658 ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 73658 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.452 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73658 00:07:15.712 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.712 killing process with pid 73658 00:07:15.712 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.712 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73658' 00:07:15.712 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 73658 00:07:15.712 [2024-12-13 04:23:15.479652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.712 [2024-12-13 04:23:15.479747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.712 [2024-12-13 04:23:15.479796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.712 [2024-12-13 04:23:15.479807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:15.712 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 73658 00:07:15.712 [2024-12-13 04:23:15.481919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.972 04:23:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:15.972 00:07:15.972 real 0m1.433s 00:07:15.972 user 0m1.535s 00:07:15.972 sys 0m0.361s 00:07:15.972 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.972 
************************************ 00:07:15.972 END TEST raid0_resize_test 00:07:15.972 ************************************ 00:07:15.972 04:23:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.972 04:23:15 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:15.972 04:23:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.972 04:23:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.972 04:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.972 ************************************ 00:07:15.972 START TEST raid1_resize_test 00:07:15.972 ************************************ 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73703 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73703' 
00:07:15.972 Process raid pid: 73703 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73703 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73703 ']' 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.972 04:23:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.972 [2024-12-13 04:23:15.966618] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:15.972 [2024-12-13 04:23:15.966812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.231 [2024-12-13 04:23:16.124707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.231 [2024-12-13 04:23:16.165191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.231 [2024-12-13 04:23:16.241883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.231 [2024-12-13 04:23:16.242022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.799 Base_1 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.799 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 Base_2 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 [2024-12-13 04:23:16.823667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:17.059 [2024-12-13 04:23:16.825827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:17.059 [2024-12-13 04:23:16.825893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:17.059 [2024-12-13 04:23:16.825909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:17.059 [2024-12-13 04:23:16.826171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:17.059 [2024-12-13 04:23:16.826303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:17.059 [2024-12-13 04:23:16.826312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:17.059 [2024-12-13 04:23:16.826420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 [2024-12-13 04:23:16.835628] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.059 [2024-12-13 04:23:16.835736] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:17.059 true 00:07:17.059 
04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:17.059 [2024-12-13 04:23:16.847812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 [2024-12-13 04:23:16.899526] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.059 [2024-12-13 04:23:16.899548] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:17.059 [2024-12-13 04:23:16.899576] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:17.059 true 00:07:17.059 04:23:16 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 [2024-12-13 04:23:16.915701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73703 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73703 ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 73703 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73703 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73703' 00:07:17.059 killing process with pid 73703 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 73703 00:07:17.059 [2024-12-13 04:23:16.997229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.059 [2024-12-13 04:23:16.997379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.059 04:23:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 73703 00:07:17.059 [2024-12-13 04:23:16.997863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.059 [2024-12-13 04:23:16.997934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:17.059 [2024-12-13 04:23:16.999729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.319 04:23:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:17.319 00:07:17.319 real 0m1.446s 00:07:17.319 user 0m1.565s 00:07:17.319 sys 0m0.349s 00:07:17.319 04:23:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.319 04:23:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.319 ************************************ 00:07:17.319 END TEST raid1_resize_test 00:07:17.319 ************************************ 00:07:17.578 04:23:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:17.578 04:23:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:17.578 04:23:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:17.578 04:23:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.578 04:23:17 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.578 04:23:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.578 ************************************ 00:07:17.578 START TEST raid_state_function_test 00:07:17.578 ************************************ 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.578 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73760 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.579 Process raid pid: 73760 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73760' 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73760 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73760 ']' 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
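The state checks in this test (`verify_raid_bdev_state Existed_Raid configuring raid0 64 2`) dump `bdev_raid_get_bdevs all` and pick out one entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare its fields against the expected values. A minimal Python sketch of the same selection and comparison, with the JSON shape copied (and trimmed) from the `raid_bdev_info` dumps in this log; the Python function only mimics the shell helper of the same name:

```python
import json

# JSON shape copied from the raid_bdev_info dumps in this log,
# trimmed to the fields the state check actually inspects.
RAW = json.dumps([{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 2,
}])

def verify_raid_bdev_state(raw: str, name: str, expected_state: str,
                           raid_level: str, strip_size: int,
                           num_operational: int) -> dict:
    """Select one raid bdev by name (like the jq filter in the trace)
    and assert the fields verify_raid_bdev_state compares."""
    info = next(b for b in json.loads(raw) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(RAW, "Existed_Raid", "configuring",
                              "raid0", 64, 2)
```

As the trace shows, `num_base_bdevs_discovered` climbs from 0 to 1 to 2 as each malloc bdev is created and claimed, while the state stays `configuring` until the raid is complete.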
00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.579 04:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.579 [2024-12-13 04:23:17.511289] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:17.579 [2024-12-13 04:23:17.511513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.843 [2024-12-13 04:23:17.669697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.843 [2024-12-13 04:23:17.709085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.843 [2024-12-13 04:23:17.786288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.843 [2024-12-13 04:23:17.786329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.427 [2024-12-13 
04:23:18.369254] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.427 [2024-12-13 04:23:18.369323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.427 [2024-12-13 04:23:18.369334] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.427 [2024-12-13 04:23:18.369346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.427 "name": "Existed_Raid", 00:07:18.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.427 "strip_size_kb": 64, 00:07:18.427 "state": "configuring", 00:07:18.427 "raid_level": "raid0", 00:07:18.427 "superblock": false, 00:07:18.427 "num_base_bdevs": 2, 00:07:18.427 "num_base_bdevs_discovered": 0, 00:07:18.427 "num_base_bdevs_operational": 2, 00:07:18.427 "base_bdevs_list": [ 00:07:18.427 { 00:07:18.427 "name": "BaseBdev1", 00:07:18.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.427 "is_configured": false, 00:07:18.427 "data_offset": 0, 00:07:18.427 "data_size": 0 00:07:18.427 }, 00:07:18.427 { 00:07:18.427 "name": "BaseBdev2", 00:07:18.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.427 "is_configured": false, 00:07:18.427 "data_offset": 0, 00:07:18.427 "data_size": 0 00:07:18.427 } 00:07:18.427 ] 00:07:18.427 }' 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.427 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.996 [2024-12-13 04:23:18.804479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.996 [2024-12-13 
04:23:18.804616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.996 [2024-12-13 04:23:18.812465] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.996 [2024-12-13 04:23:18.812570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.996 [2024-12-13 04:23:18.812597] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.996 [2024-12-13 04:23:18.812634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.996 [2024-12-13 04:23:18.835239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.996 BaseBdev1 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:18.996 04:23:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.996 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.996 [ 00:07:18.996 { 00:07:18.996 "name": "BaseBdev1", 00:07:18.996 "aliases": [ 00:07:18.996 "e769fa9f-df9b-4a85-a310-c3c7ea656471" 00:07:18.996 ], 00:07:18.996 "product_name": "Malloc disk", 00:07:18.996 "block_size": 512, 00:07:18.996 "num_blocks": 65536, 00:07:18.996 "uuid": "e769fa9f-df9b-4a85-a310-c3c7ea656471", 00:07:18.996 "assigned_rate_limits": { 00:07:18.996 "rw_ios_per_sec": 0, 00:07:18.996 "rw_mbytes_per_sec": 0, 00:07:18.996 "r_mbytes_per_sec": 0, 00:07:18.996 "w_mbytes_per_sec": 0 00:07:18.996 }, 00:07:18.996 "claimed": true, 00:07:18.996 "claim_type": "exclusive_write", 00:07:18.996 "zoned": false, 00:07:18.996 "supported_io_types": { 
00:07:18.996 "read": true, 00:07:18.996 "write": true, 00:07:18.996 "unmap": true, 00:07:18.996 "flush": true, 00:07:18.996 "reset": true, 00:07:18.996 "nvme_admin": false, 00:07:18.996 "nvme_io": false, 00:07:18.996 "nvme_io_md": false, 00:07:18.997 "write_zeroes": true, 00:07:18.997 "zcopy": true, 00:07:18.997 "get_zone_info": false, 00:07:18.997 "zone_management": false, 00:07:18.997 "zone_append": false, 00:07:18.997 "compare": false, 00:07:18.997 "compare_and_write": false, 00:07:18.997 "abort": true, 00:07:18.997 "seek_hole": false, 00:07:18.997 "seek_data": false, 00:07:18.997 "copy": true, 00:07:18.997 "nvme_iov_md": false 00:07:18.997 }, 00:07:18.997 "memory_domains": [ 00:07:18.997 { 00:07:18.997 "dma_device_id": "system", 00:07:18.997 "dma_device_type": 1 00:07:18.997 }, 00:07:18.997 { 00:07:18.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.997 "dma_device_type": 2 00:07:18.997 } 00:07:18.997 ], 00:07:18.997 "driver_specific": {} 00:07:18.997 } 00:07:18.997 ] 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.997 "name": "Existed_Raid", 00:07:18.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.997 "strip_size_kb": 64, 00:07:18.997 "state": "configuring", 00:07:18.997 "raid_level": "raid0", 00:07:18.997 "superblock": false, 00:07:18.997 "num_base_bdevs": 2, 00:07:18.997 "num_base_bdevs_discovered": 1, 00:07:18.997 "num_base_bdevs_operational": 2, 00:07:18.997 "base_bdevs_list": [ 00:07:18.997 { 00:07:18.997 "name": "BaseBdev1", 00:07:18.997 "uuid": "e769fa9f-df9b-4a85-a310-c3c7ea656471", 00:07:18.997 "is_configured": true, 00:07:18.997 "data_offset": 0, 00:07:18.997 "data_size": 65536 00:07:18.997 }, 00:07:18.997 { 00:07:18.997 "name": "BaseBdev2", 00:07:18.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.997 "is_configured": false, 00:07:18.997 "data_offset": 0, 00:07:18.997 "data_size": 0 00:07:18.997 } 00:07:18.997 ] 00:07:18.997 }' 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.997 04:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 [2024-12-13 04:23:19.274503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.641 [2024-12-13 04:23:19.274584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 [2024-12-13 04:23:19.282529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.641 [2024-12-13 04:23:19.284592] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.641 [2024-12-13 04:23:19.284633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.641 04:23:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.641 "name": "Existed_Raid", 00:07:19.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.641 "strip_size_kb": 64, 00:07:19.641 "state": "configuring", 00:07:19.641 
"raid_level": "raid0", 00:07:19.641 "superblock": false, 00:07:19.641 "num_base_bdevs": 2, 00:07:19.641 "num_base_bdevs_discovered": 1, 00:07:19.641 "num_base_bdevs_operational": 2, 00:07:19.641 "base_bdevs_list": [ 00:07:19.641 { 00:07:19.641 "name": "BaseBdev1", 00:07:19.641 "uuid": "e769fa9f-df9b-4a85-a310-c3c7ea656471", 00:07:19.641 "is_configured": true, 00:07:19.641 "data_offset": 0, 00:07:19.641 "data_size": 65536 00:07:19.641 }, 00:07:19.641 { 00:07:19.641 "name": "BaseBdev2", 00:07:19.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.641 "is_configured": false, 00:07:19.641 "data_offset": 0, 00:07:19.641 "data_size": 0 00:07:19.641 } 00:07:19.641 ] 00:07:19.641 }' 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.641 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.901 [2024-12-13 04:23:19.670493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.901 [2024-12-13 04:23:19.670607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:19.901 [2024-12-13 04:23:19.670644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.901 [2024-12-13 04:23:19.671000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:19.901 [2024-12-13 04:23:19.671222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:19.901 [2024-12-13 04:23:19.671281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000001900 00:07:19.901 [2024-12-13 04:23:19.671561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.901 BaseBdev2 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.901 [ 00:07:19.901 { 00:07:19.901 "name": "BaseBdev2", 00:07:19.901 "aliases": [ 00:07:19.901 "20dd03ee-ce1f-49e9-ba63-55349f2546c0" 00:07:19.901 ], 00:07:19.901 "product_name": "Malloc disk", 00:07:19.901 "block_size": 512, 00:07:19.901 
"num_blocks": 65536, 00:07:19.901 "uuid": "20dd03ee-ce1f-49e9-ba63-55349f2546c0", 00:07:19.901 "assigned_rate_limits": { 00:07:19.901 "rw_ios_per_sec": 0, 00:07:19.901 "rw_mbytes_per_sec": 0, 00:07:19.901 "r_mbytes_per_sec": 0, 00:07:19.901 "w_mbytes_per_sec": 0 00:07:19.901 }, 00:07:19.901 "claimed": true, 00:07:19.901 "claim_type": "exclusive_write", 00:07:19.901 "zoned": false, 00:07:19.901 "supported_io_types": { 00:07:19.901 "read": true, 00:07:19.901 "write": true, 00:07:19.901 "unmap": true, 00:07:19.901 "flush": true, 00:07:19.901 "reset": true, 00:07:19.901 "nvme_admin": false, 00:07:19.901 "nvme_io": false, 00:07:19.901 "nvme_io_md": false, 00:07:19.901 "write_zeroes": true, 00:07:19.901 "zcopy": true, 00:07:19.901 "get_zone_info": false, 00:07:19.901 "zone_management": false, 00:07:19.901 "zone_append": false, 00:07:19.901 "compare": false, 00:07:19.901 "compare_and_write": false, 00:07:19.901 "abort": true, 00:07:19.901 "seek_hole": false, 00:07:19.901 "seek_data": false, 00:07:19.901 "copy": true, 00:07:19.901 "nvme_iov_md": false 00:07:19.901 }, 00:07:19.901 "memory_domains": [ 00:07:19.901 { 00:07:19.901 "dma_device_id": "system", 00:07:19.901 "dma_device_type": 1 00:07:19.901 }, 00:07:19.901 { 00:07:19.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.901 "dma_device_type": 2 00:07:19.901 } 00:07:19.901 ], 00:07:19.901 "driver_specific": {} 00:07:19.901 } 00:07:19.901 ] 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:19.901 04:23:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.901 "name": "Existed_Raid", 00:07:19.901 "uuid": "47bc5b16-850d-4c41-89af-79d688df8c9e", 00:07:19.901 "strip_size_kb": 64, 00:07:19.901 "state": "online", 00:07:19.901 "raid_level": "raid0", 00:07:19.901 "superblock": false, 00:07:19.901 "num_base_bdevs": 2, 00:07:19.901 "num_base_bdevs_discovered": 2, 00:07:19.901 
"num_base_bdevs_operational": 2, 00:07:19.901 "base_bdevs_list": [ 00:07:19.901 { 00:07:19.901 "name": "BaseBdev1", 00:07:19.901 "uuid": "e769fa9f-df9b-4a85-a310-c3c7ea656471", 00:07:19.901 "is_configured": true, 00:07:19.901 "data_offset": 0, 00:07:19.901 "data_size": 65536 00:07:19.901 }, 00:07:19.901 { 00:07:19.901 "name": "BaseBdev2", 00:07:19.901 "uuid": "20dd03ee-ce1f-49e9-ba63-55349f2546c0", 00:07:19.901 "is_configured": true, 00:07:19.901 "data_offset": 0, 00:07:19.901 "data_size": 65536 00:07:19.901 } 00:07:19.901 ] 00:07:19.901 }' 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.901 04:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.161 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.161 [2024-12-13 04:23:20.169905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.420 "name": "Existed_Raid", 00:07:20.420 "aliases": [ 00:07:20.420 "47bc5b16-850d-4c41-89af-79d688df8c9e" 00:07:20.420 ], 00:07:20.420 "product_name": "Raid Volume", 00:07:20.420 "block_size": 512, 00:07:20.420 "num_blocks": 131072, 00:07:20.420 "uuid": "47bc5b16-850d-4c41-89af-79d688df8c9e", 00:07:20.420 "assigned_rate_limits": { 00:07:20.420 "rw_ios_per_sec": 0, 00:07:20.420 "rw_mbytes_per_sec": 0, 00:07:20.420 "r_mbytes_per_sec": 0, 00:07:20.420 "w_mbytes_per_sec": 0 00:07:20.420 }, 00:07:20.420 "claimed": false, 00:07:20.420 "zoned": false, 00:07:20.420 "supported_io_types": { 00:07:20.420 "read": true, 00:07:20.420 "write": true, 00:07:20.420 "unmap": true, 00:07:20.420 "flush": true, 00:07:20.420 "reset": true, 00:07:20.420 "nvme_admin": false, 00:07:20.420 "nvme_io": false, 00:07:20.420 "nvme_io_md": false, 00:07:20.420 "write_zeroes": true, 00:07:20.420 "zcopy": false, 00:07:20.420 "get_zone_info": false, 00:07:20.420 "zone_management": false, 00:07:20.420 "zone_append": false, 00:07:20.420 "compare": false, 00:07:20.420 "compare_and_write": false, 00:07:20.420 "abort": false, 00:07:20.420 "seek_hole": false, 00:07:20.420 "seek_data": false, 00:07:20.420 "copy": false, 00:07:20.420 "nvme_iov_md": false 00:07:20.420 }, 00:07:20.420 "memory_domains": [ 00:07:20.420 { 00:07:20.420 "dma_device_id": "system", 00:07:20.420 "dma_device_type": 1 00:07:20.420 }, 00:07:20.420 { 00:07:20.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.420 "dma_device_type": 2 00:07:20.420 }, 00:07:20.420 { 00:07:20.420 "dma_device_id": "system", 00:07:20.420 "dma_device_type": 1 00:07:20.420 }, 00:07:20.420 { 00:07:20.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.420 "dma_device_type": 2 00:07:20.420 } 00:07:20.420 ], 00:07:20.420 "driver_specific": { 
00:07:20.420 "raid": { 00:07:20.420 "uuid": "47bc5b16-850d-4c41-89af-79d688df8c9e", 00:07:20.420 "strip_size_kb": 64, 00:07:20.420 "state": "online", 00:07:20.420 "raid_level": "raid0", 00:07:20.420 "superblock": false, 00:07:20.420 "num_base_bdevs": 2, 00:07:20.420 "num_base_bdevs_discovered": 2, 00:07:20.420 "num_base_bdevs_operational": 2, 00:07:20.420 "base_bdevs_list": [ 00:07:20.420 { 00:07:20.420 "name": "BaseBdev1", 00:07:20.420 "uuid": "e769fa9f-df9b-4a85-a310-c3c7ea656471", 00:07:20.420 "is_configured": true, 00:07:20.420 "data_offset": 0, 00:07:20.420 "data_size": 65536 00:07:20.420 }, 00:07:20.420 { 00:07:20.420 "name": "BaseBdev2", 00:07:20.420 "uuid": "20dd03ee-ce1f-49e9-ba63-55349f2546c0", 00:07:20.420 "is_configured": true, 00:07:20.420 "data_offset": 0, 00:07:20.420 "data_size": 65536 00:07:20.420 } 00:07:20.420 ] 00:07:20.420 } 00:07:20.420 } 00:07:20.420 }' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:20.420 BaseBdev2' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.420 
04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.420 [2024-12-13 04:23:20.405370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:20.420 [2024-12-13 04:23:20.405455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.420 [2024-12-13 04:23:20.405550] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.420 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.421 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.421 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.421 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.421 04:23:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.421 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.679 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.679 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.679 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.679 "name": "Existed_Raid", 00:07:20.679 "uuid": "47bc5b16-850d-4c41-89af-79d688df8c9e", 00:07:20.679 "strip_size_kb": 64, 00:07:20.679 "state": "offline", 00:07:20.679 "raid_level": "raid0", 00:07:20.679 "superblock": false, 00:07:20.679 "num_base_bdevs": 2, 00:07:20.679 "num_base_bdevs_discovered": 1, 00:07:20.679 "num_base_bdevs_operational": 1, 00:07:20.679 "base_bdevs_list": [ 00:07:20.679 { 00:07:20.679 "name": null, 00:07:20.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.679 "is_configured": false, 00:07:20.679 "data_offset": 0, 00:07:20.679 "data_size": 65536 00:07:20.679 }, 00:07:20.679 { 00:07:20.679 "name": "BaseBdev2", 00:07:20.679 "uuid": "20dd03ee-ce1f-49e9-ba63-55349f2546c0", 00:07:20.679 "is_configured": true, 00:07:20.679 "data_offset": 0, 00:07:20.679 "data_size": 65536 00:07:20.679 } 00:07:20.679 ] 00:07:20.679 }' 00:07:20.680 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.680 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.939 [2024-12-13 04:23:20.884938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:20.939 [2024-12-13 04:23:20.885070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.939 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.939 04:23:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73760 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73760 ']' 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73760 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73760 00:07:21.198 killing process with pid 73760 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.198 04:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.198 04:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73760' 00:07:21.198 04:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73760 00:07:21.198 [2024-12-13 04:23:21.002218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.198 04:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73760 00:07:21.198 [2024-12-13 04:23:21.003773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.458 ************************************ 00:07:21.458 END TEST raid_state_function_test 
00:07:21.458 ************************************ 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.458 00:07:21.458 real 0m3.917s 00:07:21.458 user 0m6.030s 00:07:21.458 sys 0m0.838s 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.458 04:23:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:21.458 04:23:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.458 04:23:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.458 04:23:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.458 ************************************ 00:07:21.458 START TEST raid_state_function_test_sb 00:07:21.458 ************************************ 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.458 Process raid pid: 74002 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74002 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74002' 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74002 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74002 ']' 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.458 04:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.717 [2024-12-13 04:23:21.495458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:21.717 [2024-12-13 04:23:21.495670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.717 [2024-12-13 04:23:21.653029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.717 [2024-12-13 04:23:21.693535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.976 [2024-12-13 04:23:21.769389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.976 [2024-12-13 04:23:21.769560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.544 [2024-12-13 04:23:22.307738] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.544 [2024-12-13 04:23:22.307876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.544 [2024-12-13 04:23:22.307913] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.544 [2024-12-13 04:23:22.307939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.544 
04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.544 "name": "Existed_Raid", 00:07:22.544 "uuid": "e0afdd87-d004-4113-9172-fb0617b6b114", 00:07:22.544 "strip_size_kb": 
64, 00:07:22.544 "state": "configuring", 00:07:22.544 "raid_level": "raid0", 00:07:22.544 "superblock": true, 00:07:22.544 "num_base_bdevs": 2, 00:07:22.544 "num_base_bdevs_discovered": 0, 00:07:22.544 "num_base_bdevs_operational": 2, 00:07:22.544 "base_bdevs_list": [ 00:07:22.544 { 00:07:22.544 "name": "BaseBdev1", 00:07:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.544 "is_configured": false, 00:07:22.544 "data_offset": 0, 00:07:22.544 "data_size": 0 00:07:22.544 }, 00:07:22.544 { 00:07:22.544 "name": "BaseBdev2", 00:07:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.544 "is_configured": false, 00:07:22.544 "data_offset": 0, 00:07:22.544 "data_size": 0 00:07:22.544 } 00:07:22.544 ] 00:07:22.544 }' 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.544 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.804 [2024-12-13 04:23:22.770883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.804 [2024-12-13 04:23:22.770972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.804 04:23:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.804 [2024-12-13 04:23:22.782878] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.804 [2024-12-13 04:23:22.782971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.804 [2024-12-13 04:23:22.782997] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.804 [2024-12-13 04:23:22.783033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.804 [2024-12-13 04:23:22.809882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.804 BaseBdev1 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.804 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.064 [ 00:07:23.064 { 00:07:23.064 "name": "BaseBdev1", 00:07:23.064 "aliases": [ 00:07:23.064 "a51e1023-7f58-4b4a-8de3-f99370837c4c" 00:07:23.064 ], 00:07:23.064 "product_name": "Malloc disk", 00:07:23.064 "block_size": 512, 00:07:23.064 "num_blocks": 65536, 00:07:23.064 "uuid": "a51e1023-7f58-4b4a-8de3-f99370837c4c", 00:07:23.064 "assigned_rate_limits": { 00:07:23.064 "rw_ios_per_sec": 0, 00:07:23.064 "rw_mbytes_per_sec": 0, 00:07:23.064 "r_mbytes_per_sec": 0, 00:07:23.064 "w_mbytes_per_sec": 0 00:07:23.064 }, 00:07:23.064 "claimed": true, 00:07:23.064 "claim_type": "exclusive_write", 00:07:23.064 "zoned": false, 00:07:23.064 "supported_io_types": { 00:07:23.064 "read": true, 00:07:23.064 "write": true, 00:07:23.064 "unmap": true, 00:07:23.064 "flush": true, 00:07:23.064 "reset": true, 00:07:23.064 "nvme_admin": false, 00:07:23.064 "nvme_io": false, 00:07:23.064 "nvme_io_md": false, 00:07:23.064 "write_zeroes": true, 00:07:23.064 "zcopy": true, 00:07:23.064 "get_zone_info": false, 00:07:23.064 "zone_management": false, 00:07:23.064 "zone_append": false, 00:07:23.064 "compare": false, 00:07:23.064 "compare_and_write": false, 00:07:23.064 
"abort": true, 00:07:23.064 "seek_hole": false, 00:07:23.064 "seek_data": false, 00:07:23.064 "copy": true, 00:07:23.064 "nvme_iov_md": false 00:07:23.064 }, 00:07:23.064 "memory_domains": [ 00:07:23.064 { 00:07:23.064 "dma_device_id": "system", 00:07:23.064 "dma_device_type": 1 00:07:23.064 }, 00:07:23.064 { 00:07:23.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.064 "dma_device_type": 2 00:07:23.064 } 00:07:23.064 ], 00:07:23.064 "driver_specific": {} 00:07:23.064 } 00:07:23.064 ] 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.064 "name": "Existed_Raid", 00:07:23.064 "uuid": "32336dc0-25f0-438f-981d-f6c3d563832e", 00:07:23.064 "strip_size_kb": 64, 00:07:23.064 "state": "configuring", 00:07:23.064 "raid_level": "raid0", 00:07:23.064 "superblock": true, 00:07:23.064 "num_base_bdevs": 2, 00:07:23.064 "num_base_bdevs_discovered": 1, 00:07:23.064 "num_base_bdevs_operational": 2, 00:07:23.064 "base_bdevs_list": [ 00:07:23.064 { 00:07:23.064 "name": "BaseBdev1", 00:07:23.064 "uuid": "a51e1023-7f58-4b4a-8de3-f99370837c4c", 00:07:23.064 "is_configured": true, 00:07:23.064 "data_offset": 2048, 00:07:23.064 "data_size": 63488 00:07:23.064 }, 00:07:23.064 { 00:07:23.064 "name": "BaseBdev2", 00:07:23.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.064 "is_configured": false, 00:07:23.064 "data_offset": 0, 00:07:23.064 "data_size": 0 00:07:23.064 } 00:07:23.064 ] 00:07:23.064 }' 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.064 04:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.324 [2024-12-13 04:23:23.317082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.324 [2024-12-13 04:23:23.317141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.324 [2024-12-13 04:23:23.325093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.324 [2024-12-13 04:23:23.327184] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.324 [2024-12-13 04:23:23.327229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.324 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.583 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.583 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.583 "name": "Existed_Raid", 00:07:23.583 "uuid": "9ae21834-fded-4e09-a46b-70d07c38a44d", 00:07:23.583 "strip_size_kb": 64, 00:07:23.583 "state": "configuring", 00:07:23.583 "raid_level": "raid0", 00:07:23.583 "superblock": true, 00:07:23.583 "num_base_bdevs": 2, 00:07:23.583 "num_base_bdevs_discovered": 1, 00:07:23.583 "num_base_bdevs_operational": 2, 00:07:23.583 "base_bdevs_list": [ 00:07:23.583 { 00:07:23.583 "name": "BaseBdev1", 00:07:23.583 "uuid": "a51e1023-7f58-4b4a-8de3-f99370837c4c", 00:07:23.583 "is_configured": true, 00:07:23.583 "data_offset": 2048, 
00:07:23.583 "data_size": 63488 00:07:23.583 }, 00:07:23.583 { 00:07:23.583 "name": "BaseBdev2", 00:07:23.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.583 "is_configured": false, 00:07:23.583 "data_offset": 0, 00:07:23.583 "data_size": 0 00:07:23.583 } 00:07:23.583 ] 00:07:23.583 }' 00:07:23.583 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.583 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.842 BaseBdev2 00:07:23.842 [2024-12-13 04:23:23.796986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.842 [2024-12-13 04:23:23.797198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:23.842 [2024-12-13 04:23:23.797220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.842 [2024-12-13 04:23:23.797508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:23.842 [2024-12-13 04:23:23.797661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:23.842 [2024-12-13 04:23:23.797677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:23.842 [2024-12-13 04:23:23.797805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.842 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.843 [ 00:07:23.843 { 00:07:23.843 "name": "BaseBdev2", 00:07:23.843 "aliases": [ 00:07:23.843 "b410d15e-c872-4156-88e4-19c91db68eed" 00:07:23.843 ], 00:07:23.843 "product_name": "Malloc disk", 00:07:23.843 "block_size": 512, 00:07:23.843 "num_blocks": 65536, 00:07:23.843 "uuid": "b410d15e-c872-4156-88e4-19c91db68eed", 00:07:23.843 "assigned_rate_limits": { 00:07:23.843 "rw_ios_per_sec": 0, 00:07:23.843 "rw_mbytes_per_sec": 0, 00:07:23.843 "r_mbytes_per_sec": 0, 00:07:23.843 "w_mbytes_per_sec": 0 00:07:23.843 }, 00:07:23.843 "claimed": true, 00:07:23.843 "claim_type": 
"exclusive_write", 00:07:23.843 "zoned": false, 00:07:23.843 "supported_io_types": { 00:07:23.843 "read": true, 00:07:23.843 "write": true, 00:07:23.843 "unmap": true, 00:07:23.843 "flush": true, 00:07:23.843 "reset": true, 00:07:23.843 "nvme_admin": false, 00:07:23.843 "nvme_io": false, 00:07:23.843 "nvme_io_md": false, 00:07:23.843 "write_zeroes": true, 00:07:23.843 "zcopy": true, 00:07:23.843 "get_zone_info": false, 00:07:23.843 "zone_management": false, 00:07:23.843 "zone_append": false, 00:07:23.843 "compare": false, 00:07:23.843 "compare_and_write": false, 00:07:23.843 "abort": true, 00:07:23.843 "seek_hole": false, 00:07:23.843 "seek_data": false, 00:07:23.843 "copy": true, 00:07:23.843 "nvme_iov_md": false 00:07:23.843 }, 00:07:23.843 "memory_domains": [ 00:07:23.843 { 00:07:23.843 "dma_device_id": "system", 00:07:23.843 "dma_device_type": 1 00:07:23.843 }, 00:07:23.843 { 00:07:23.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.843 "dma_device_type": 2 00:07:23.843 } 00:07:23.843 ], 00:07:23.843 "driver_specific": {} 00:07:23.843 } 00:07:23.843 ] 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.843 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.102 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.102 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.102 "name": "Existed_Raid", 00:07:24.102 "uuid": "9ae21834-fded-4e09-a46b-70d07c38a44d", 00:07:24.102 "strip_size_kb": 64, 00:07:24.102 "state": "online", 00:07:24.102 "raid_level": "raid0", 00:07:24.102 "superblock": true, 00:07:24.102 "num_base_bdevs": 2, 00:07:24.102 "num_base_bdevs_discovered": 2, 00:07:24.102 "num_base_bdevs_operational": 2, 00:07:24.102 "base_bdevs_list": [ 00:07:24.102 { 00:07:24.102 "name": "BaseBdev1", 00:07:24.102 "uuid": "a51e1023-7f58-4b4a-8de3-f99370837c4c", 00:07:24.102 "is_configured": true, 00:07:24.102 "data_offset": 2048, 00:07:24.102 "data_size": 63488 
00:07:24.102 }, 00:07:24.102 { 00:07:24.102 "name": "BaseBdev2", 00:07:24.102 "uuid": "b410d15e-c872-4156-88e4-19c91db68eed", 00:07:24.102 "is_configured": true, 00:07:24.102 "data_offset": 2048, 00:07:24.102 "data_size": 63488 00:07:24.102 } 00:07:24.102 ] 00:07:24.102 }' 00:07:24.102 04:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.102 04:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:24.362 [2024-12-13 04:23:24.316486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.362 "name": 
"Existed_Raid", 00:07:24.362 "aliases": [ 00:07:24.362 "9ae21834-fded-4e09-a46b-70d07c38a44d" 00:07:24.362 ], 00:07:24.362 "product_name": "Raid Volume", 00:07:24.362 "block_size": 512, 00:07:24.362 "num_blocks": 126976, 00:07:24.362 "uuid": "9ae21834-fded-4e09-a46b-70d07c38a44d", 00:07:24.362 "assigned_rate_limits": { 00:07:24.362 "rw_ios_per_sec": 0, 00:07:24.362 "rw_mbytes_per_sec": 0, 00:07:24.362 "r_mbytes_per_sec": 0, 00:07:24.362 "w_mbytes_per_sec": 0 00:07:24.362 }, 00:07:24.362 "claimed": false, 00:07:24.362 "zoned": false, 00:07:24.362 "supported_io_types": { 00:07:24.362 "read": true, 00:07:24.362 "write": true, 00:07:24.362 "unmap": true, 00:07:24.362 "flush": true, 00:07:24.362 "reset": true, 00:07:24.362 "nvme_admin": false, 00:07:24.362 "nvme_io": false, 00:07:24.362 "nvme_io_md": false, 00:07:24.362 "write_zeroes": true, 00:07:24.362 "zcopy": false, 00:07:24.362 "get_zone_info": false, 00:07:24.362 "zone_management": false, 00:07:24.362 "zone_append": false, 00:07:24.362 "compare": false, 00:07:24.362 "compare_and_write": false, 00:07:24.362 "abort": false, 00:07:24.362 "seek_hole": false, 00:07:24.362 "seek_data": false, 00:07:24.362 "copy": false, 00:07:24.362 "nvme_iov_md": false 00:07:24.362 }, 00:07:24.362 "memory_domains": [ 00:07:24.362 { 00:07:24.362 "dma_device_id": "system", 00:07:24.362 "dma_device_type": 1 00:07:24.362 }, 00:07:24.362 { 00:07:24.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.362 "dma_device_type": 2 00:07:24.362 }, 00:07:24.362 { 00:07:24.362 "dma_device_id": "system", 00:07:24.362 "dma_device_type": 1 00:07:24.362 }, 00:07:24.362 { 00:07:24.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.362 "dma_device_type": 2 00:07:24.362 } 00:07:24.362 ], 00:07:24.362 "driver_specific": { 00:07:24.362 "raid": { 00:07:24.362 "uuid": "9ae21834-fded-4e09-a46b-70d07c38a44d", 00:07:24.362 "strip_size_kb": 64, 00:07:24.362 "state": "online", 00:07:24.362 "raid_level": "raid0", 00:07:24.362 "superblock": true, 00:07:24.362 
"num_base_bdevs": 2, 00:07:24.362 "num_base_bdevs_discovered": 2, 00:07:24.362 "num_base_bdevs_operational": 2, 00:07:24.362 "base_bdevs_list": [ 00:07:24.362 { 00:07:24.362 "name": "BaseBdev1", 00:07:24.362 "uuid": "a51e1023-7f58-4b4a-8de3-f99370837c4c", 00:07:24.362 "is_configured": true, 00:07:24.362 "data_offset": 2048, 00:07:24.362 "data_size": 63488 00:07:24.362 }, 00:07:24.362 { 00:07:24.362 "name": "BaseBdev2", 00:07:24.362 "uuid": "b410d15e-c872-4156-88e4-19c91db68eed", 00:07:24.362 "is_configured": true, 00:07:24.362 "data_offset": 2048, 00:07:24.362 "data_size": 63488 00:07:24.362 } 00:07:24.362 ] 00:07:24.362 } 00:07:24.362 } 00:07:24.362 }' 00:07:24.362 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:24.622 BaseBdev2' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.622 [2024-12-13 04:23:24.531898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.622 [2024-12-13 04:23:24.531937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.622 [2024-12-13 04:23:24.531990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.622 04:23:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.622 "name": "Existed_Raid", 00:07:24.622 "uuid": "9ae21834-fded-4e09-a46b-70d07c38a44d", 00:07:24.622 "strip_size_kb": 64, 00:07:24.622 "state": "offline", 00:07:24.622 "raid_level": "raid0", 00:07:24.622 "superblock": true, 00:07:24.622 "num_base_bdevs": 2, 00:07:24.622 "num_base_bdevs_discovered": 1, 00:07:24.622 "num_base_bdevs_operational": 1, 00:07:24.622 "base_bdevs_list": [ 00:07:24.622 { 00:07:24.622 "name": null, 00:07:24.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.622 "is_configured": false, 00:07:24.622 "data_offset": 0, 00:07:24.622 "data_size": 63488 00:07:24.622 }, 00:07:24.622 { 00:07:24.622 "name": "BaseBdev2", 00:07:24.622 "uuid": "b410d15e-c872-4156-88e4-19c91db68eed", 00:07:24.622 "is_configured": true, 00:07:24.622 "data_offset": 2048, 00:07:24.622 "data_size": 63488 00:07:24.622 } 00:07:24.622 ] 00:07:24.622 }' 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.622 04:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.192 04:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.192 04:23:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.192 [2024-12-13 04:23:25.055894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.192 [2024-12-13 04:23:25.056032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:25.192 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.193 04:23:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74002 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74002 ']' 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74002 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74002 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74002' 00:07:25.193 killing process with pid 74002 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74002 00:07:25.193 [2024-12-13 04:23:25.171896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.193 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74002 00:07:25.193 [2024-12-13 04:23:25.173455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.760 ************************************ 
00:07:25.760 END TEST raid_state_function_test_sb 00:07:25.760 ************************************ 00:07:25.760 04:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.760 00:07:25.760 real 0m4.095s 00:07:25.760 user 0m6.364s 00:07:25.760 sys 0m0.860s 00:07:25.760 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.760 04:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.760 04:23:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:25.760 04:23:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:25.760 04:23:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.761 04:23:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.761 ************************************ 00:07:25.761 START TEST raid_superblock_test 00:07:25.761 ************************************ 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:25.761 
04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74243 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74243 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74243 ']' 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.761 04:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.761 [2024-12-13 04:23:25.656782] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:25.761 [2024-12-13 04:23:25.656979] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74243 ] 00:07:26.021 [2024-12-13 04:23:25.814924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.021 [2024-12-13 04:23:25.856214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.021 [2024-12-13 04:23:25.932232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.021 [2024-12-13 04:23:25.932276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.589 04:23:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.589 malloc1 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.589 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.589 [2024-12-13 04:23:26.501238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:26.589 [2024-12-13 04:23:26.501395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.589 [2024-12-13 04:23:26.501448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:26.589 [2024-12-13 04:23:26.501497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.590 [2024-12-13 04:23:26.503879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.590 [2024-12-13 04:23:26.503960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:26.590 pt1 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.590 04:23:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.590 malloc2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.590 [2024-12-13 04:23:26.539611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.590 [2024-12-13 04:23:26.539721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.590 [2024-12-13 04:23:26.539762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:26.590 
[2024-12-13 04:23:26.539793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.590 [2024-12-13 04:23:26.542177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.590 [2024-12-13 04:23:26.542254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.590 pt2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.590 [2024-12-13 04:23:26.551654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:26.590 [2024-12-13 04:23:26.553816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.590 [2024-12-13 04:23:26.554007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:26.590 [2024-12-13 04:23:26.554059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.590 [2024-12-13 04:23:26.554327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:26.590 [2024-12-13 04:23:26.554503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:26.590 [2024-12-13 04:23:26.554516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:26.590 [2024-12-13 04:23:26.554646] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.590 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.849 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.849 "name": "raid_bdev1", 00:07:26.849 "uuid": 
"a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:26.849 "strip_size_kb": 64, 00:07:26.849 "state": "online", 00:07:26.849 "raid_level": "raid0", 00:07:26.849 "superblock": true, 00:07:26.849 "num_base_bdevs": 2, 00:07:26.849 "num_base_bdevs_discovered": 2, 00:07:26.849 "num_base_bdevs_operational": 2, 00:07:26.849 "base_bdevs_list": [ 00:07:26.849 { 00:07:26.849 "name": "pt1", 00:07:26.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.849 "is_configured": true, 00:07:26.849 "data_offset": 2048, 00:07:26.849 "data_size": 63488 00:07:26.849 }, 00:07:26.849 { 00:07:26.849 "name": "pt2", 00:07:26.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.849 "is_configured": true, 00:07:26.849 "data_offset": 2048, 00:07:26.849 "data_size": 63488 00:07:26.849 } 00:07:26.849 ] 00:07:26.849 }' 00:07:26.849 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.849 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.108 04:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.108 04:23:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 [2024-12-13 04:23:27.007098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.108 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.108 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.108 "name": "raid_bdev1", 00:07:27.108 "aliases": [ 00:07:27.108 "a1b753cd-f444-4d9e-b5df-1a19a652aac8" 00:07:27.108 ], 00:07:27.108 "product_name": "Raid Volume", 00:07:27.108 "block_size": 512, 00:07:27.108 "num_blocks": 126976, 00:07:27.108 "uuid": "a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:27.108 "assigned_rate_limits": { 00:07:27.108 "rw_ios_per_sec": 0, 00:07:27.108 "rw_mbytes_per_sec": 0, 00:07:27.108 "r_mbytes_per_sec": 0, 00:07:27.108 "w_mbytes_per_sec": 0 00:07:27.108 }, 00:07:27.108 "claimed": false, 00:07:27.108 "zoned": false, 00:07:27.108 "supported_io_types": { 00:07:27.108 "read": true, 00:07:27.108 "write": true, 00:07:27.108 "unmap": true, 00:07:27.108 "flush": true, 00:07:27.108 "reset": true, 00:07:27.108 "nvme_admin": false, 00:07:27.108 "nvme_io": false, 00:07:27.108 "nvme_io_md": false, 00:07:27.108 "write_zeroes": true, 00:07:27.108 "zcopy": false, 00:07:27.108 "get_zone_info": false, 00:07:27.108 "zone_management": false, 00:07:27.108 "zone_append": false, 00:07:27.108 "compare": false, 00:07:27.109 "compare_and_write": false, 00:07:27.109 "abort": false, 00:07:27.109 "seek_hole": false, 00:07:27.109 "seek_data": false, 00:07:27.109 "copy": false, 00:07:27.109 "nvme_iov_md": false 00:07:27.109 }, 00:07:27.109 "memory_domains": [ 00:07:27.109 { 00:07:27.109 "dma_device_id": "system", 00:07:27.109 "dma_device_type": 1 00:07:27.109 }, 00:07:27.109 { 00:07:27.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.109 "dma_device_type": 2 00:07:27.109 }, 00:07:27.109 { 00:07:27.109 "dma_device_id": "system", 00:07:27.109 "dma_device_type": 
1 00:07:27.109 }, 00:07:27.109 { 00:07:27.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.109 "dma_device_type": 2 00:07:27.109 } 00:07:27.109 ], 00:07:27.109 "driver_specific": { 00:07:27.109 "raid": { 00:07:27.109 "uuid": "a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:27.109 "strip_size_kb": 64, 00:07:27.109 "state": "online", 00:07:27.109 "raid_level": "raid0", 00:07:27.109 "superblock": true, 00:07:27.109 "num_base_bdevs": 2, 00:07:27.109 "num_base_bdevs_discovered": 2, 00:07:27.109 "num_base_bdevs_operational": 2, 00:07:27.109 "base_bdevs_list": [ 00:07:27.109 { 00:07:27.109 "name": "pt1", 00:07:27.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.109 "is_configured": true, 00:07:27.109 "data_offset": 2048, 00:07:27.109 "data_size": 63488 00:07:27.109 }, 00:07:27.109 { 00:07:27.109 "name": "pt2", 00:07:27.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.109 "is_configured": true, 00:07:27.109 "data_offset": 2048, 00:07:27.109 "data_size": 63488 00:07:27.109 } 00:07:27.109 ] 00:07:27.109 } 00:07:27.109 } 00:07:27.109 }' 00:07:27.109 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.109 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.109 pt2' 00:07:27.109 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.369 [2024-12-13 04:23:27.246613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a1b753cd-f444-4d9e-b5df-1a19a652aac8 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a1b753cd-f444-4d9e-b5df-1a19a652aac8 ']' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.369 [2024-12-13 04:23:27.294297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.369 [2024-12-13 04:23:27.294373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.369 [2024-12-13 04:23:27.294482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.369 [2024-12-13 04:23:27.294584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.369 [2024-12-13 04:23:27.294636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.369 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.629 [2024-12-13 04:23:27.454034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:27.629 [2024-12-13 04:23:27.456138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:27.629 [2024-12-13 04:23:27.456225] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:27.629 [2024-12-13 04:23:27.456270] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:27.629 [2024-12-13 04:23:27.456286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.629 [2024-12-13 04:23:27.456300] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:27.629 request: 00:07:27.629 { 00:07:27.629 "name": "raid_bdev1", 00:07:27.629 "raid_level": "raid0", 00:07:27.629 "base_bdevs": [ 00:07:27.629 "malloc1", 00:07:27.629 "malloc2" 00:07:27.629 ], 00:07:27.629 "strip_size_kb": 64, 00:07:27.629 "superblock": false, 00:07:27.629 "method": "bdev_raid_create", 00:07:27.629 "req_id": 1 00:07:27.629 } 00:07:27.629 Got JSON-RPC error response 00:07:27.629 response: 00:07:27.629 { 00:07:27.629 "code": -17, 00:07:27.629 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:27.629 } 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.629 [2024-12-13 04:23:27.517924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.629 [2024-12-13 04:23:27.518037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.629 [2024-12-13 04:23:27.518078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:27.629 [2024-12-13 04:23:27.518115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.629 [2024-12-13 04:23:27.520412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.629 [2024-12-13 04:23:27.520521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.629 [2024-12-13 04:23:27.520626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:27.629 [2024-12-13 04:23:27.520687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.629 pt1 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.629 "name": "raid_bdev1", 00:07:27.629 "uuid": "a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:27.629 "strip_size_kb": 64, 00:07:27.629 "state": "configuring", 00:07:27.629 "raid_level": "raid0", 00:07:27.629 "superblock": true, 00:07:27.629 "num_base_bdevs": 2, 00:07:27.629 "num_base_bdevs_discovered": 1, 00:07:27.629 "num_base_bdevs_operational": 2, 00:07:27.629 "base_bdevs_list": [ 00:07:27.629 { 00:07:27.629 "name": "pt1", 00:07:27.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.629 "is_configured": true, 00:07:27.629 "data_offset": 2048, 00:07:27.629 "data_size": 63488 00:07:27.629 }, 00:07:27.629 { 00:07:27.629 "name": null, 00:07:27.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.629 "is_configured": false, 00:07:27.629 "data_offset": 2048, 00:07:27.629 "data_size": 63488 00:07:27.629 } 00:07:27.629 ] 00:07:27.629 }' 00:07:27.629 04:23:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.629 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.198 [2024-12-13 04:23:27.937171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:28.198 [2024-12-13 04:23:27.937222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.198 [2024-12-13 04:23:27.937239] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.198 [2024-12-13 04:23:27.937248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.198 [2024-12-13 04:23:27.937585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.198 [2024-12-13 04:23:27.937601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:28.198 [2024-12-13 04:23:27.937653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:28.198 [2024-12-13 04:23:27.937701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:28.198 [2024-12-13 04:23:27.937778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:28.198 [2024-12-13 04:23:27.937809] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.198 [2024-12-13 04:23:27.938060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:28.198 [2024-12-13 04:23:27.938163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:28.198 [2024-12-13 04:23:27.938178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:28.198 [2024-12-13 04:23:27.938265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.198 pt2 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.198 "name": "raid_bdev1", 00:07:28.198 "uuid": "a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:28.198 "strip_size_kb": 64, 00:07:28.198 "state": "online", 00:07:28.198 "raid_level": "raid0", 00:07:28.198 "superblock": true, 00:07:28.198 "num_base_bdevs": 2, 00:07:28.198 "num_base_bdevs_discovered": 2, 00:07:28.198 "num_base_bdevs_operational": 2, 00:07:28.198 "base_bdevs_list": [ 00:07:28.198 { 00:07:28.198 "name": "pt1", 00:07:28.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.198 "is_configured": true, 00:07:28.198 "data_offset": 2048, 00:07:28.198 "data_size": 63488 00:07:28.198 }, 00:07:28.198 { 00:07:28.198 "name": "pt2", 00:07:28.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.198 "is_configured": true, 00:07:28.198 "data_offset": 2048, 00:07:28.198 "data_size": 63488 00:07:28.198 } 00:07:28.198 ] 00:07:28.198 }' 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.198 04:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.458 
04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.458 [2024-12-13 04:23:28.432584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.458 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.718 "name": "raid_bdev1", 00:07:28.718 "aliases": [ 00:07:28.718 "a1b753cd-f444-4d9e-b5df-1a19a652aac8" 00:07:28.718 ], 00:07:28.718 "product_name": "Raid Volume", 00:07:28.718 "block_size": 512, 00:07:28.718 "num_blocks": 126976, 00:07:28.718 "uuid": "a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:28.718 "assigned_rate_limits": { 00:07:28.718 "rw_ios_per_sec": 0, 00:07:28.718 "rw_mbytes_per_sec": 0, 00:07:28.718 "r_mbytes_per_sec": 0, 00:07:28.718 "w_mbytes_per_sec": 0 00:07:28.718 }, 00:07:28.718 "claimed": false, 00:07:28.718 "zoned": false, 00:07:28.718 "supported_io_types": { 00:07:28.718 "read": true, 00:07:28.718 "write": true, 00:07:28.718 "unmap": true, 00:07:28.718 "flush": true, 00:07:28.718 "reset": true, 00:07:28.718 "nvme_admin": false, 00:07:28.718 "nvme_io": false, 00:07:28.718 "nvme_io_md": false, 00:07:28.718 
"write_zeroes": true, 00:07:28.718 "zcopy": false, 00:07:28.718 "get_zone_info": false, 00:07:28.718 "zone_management": false, 00:07:28.718 "zone_append": false, 00:07:28.718 "compare": false, 00:07:28.718 "compare_and_write": false, 00:07:28.718 "abort": false, 00:07:28.718 "seek_hole": false, 00:07:28.718 "seek_data": false, 00:07:28.718 "copy": false, 00:07:28.718 "nvme_iov_md": false 00:07:28.718 }, 00:07:28.718 "memory_domains": [ 00:07:28.718 { 00:07:28.718 "dma_device_id": "system", 00:07:28.718 "dma_device_type": 1 00:07:28.718 }, 00:07:28.718 { 00:07:28.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.718 "dma_device_type": 2 00:07:28.718 }, 00:07:28.718 { 00:07:28.718 "dma_device_id": "system", 00:07:28.718 "dma_device_type": 1 00:07:28.718 }, 00:07:28.718 { 00:07:28.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.718 "dma_device_type": 2 00:07:28.718 } 00:07:28.718 ], 00:07:28.718 "driver_specific": { 00:07:28.718 "raid": { 00:07:28.718 "uuid": "a1b753cd-f444-4d9e-b5df-1a19a652aac8", 00:07:28.718 "strip_size_kb": 64, 00:07:28.718 "state": "online", 00:07:28.718 "raid_level": "raid0", 00:07:28.718 "superblock": true, 00:07:28.718 "num_base_bdevs": 2, 00:07:28.718 "num_base_bdevs_discovered": 2, 00:07:28.718 "num_base_bdevs_operational": 2, 00:07:28.718 "base_bdevs_list": [ 00:07:28.718 { 00:07:28.718 "name": "pt1", 00:07:28.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.718 "is_configured": true, 00:07:28.718 "data_offset": 2048, 00:07:28.718 "data_size": 63488 00:07:28.718 }, 00:07:28.718 { 00:07:28.718 "name": "pt2", 00:07:28.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.718 "is_configured": true, 00:07:28.718 "data_offset": 2048, 00:07:28.718 "data_size": 63488 00:07:28.718 } 00:07:28.718 ] 00:07:28.718 } 00:07:28.718 } 00:07:28.718 }' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.718 pt2' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.718 04:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.718 [2024-12-13 04:23:28.684135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a1b753cd-f444-4d9e-b5df-1a19a652aac8 '!=' a1b753cd-f444-4d9e-b5df-1a19a652aac8 ']' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74243 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74243 ']' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74243 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.718 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74243 00:07:28.978 killing process with pid 74243 
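The `cmp_base_bdev='512   '` assignments and the escaped pattern `[[ 512 == \5\1\2\ \ \  ]]` above are easy to misread: jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` turns the null metadata fields into empty strings, so the profile of a plain 512-byte-block bdev is the string `512` followed by three spaces, and the comparison must match those spaces byte-for-byte. A minimal pure-bash reconstruction of that check (the profile strings are hard-coded to what the jq filter emits for these bdevs; this is an annotation, not part of the test script):

```shell
#!/usr/bin/env bash
# Reconstruct the bdev_raid.sh@189-193 profile comparison: the jq join of
# [block_size, md_size, md_interleave, dif_type] leaves trailing spaces
# when the metadata fields are null, and the match must be byte-exact.
set -u

cmp_raid_bdev='512   '   # block_size=512; md_size/md_interleave/dif_type null
cmp_base_bdev='512   '   # every base bdev must expose the same profile

if [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]; then
    echo "base bdev profile matches raid profile (trailing spaces included)"
else
    echo "profile mismatch: '$cmp_raid_bdev' vs '$cmp_base_bdev'" >&2
    exit 1
fi

# Any deviation (e.g. a different block size) fails the same literal match:
[[ '4096   ' != "$cmp_raid_bdev" ]] && echo "mismatched profile would be rejected"
```

This is why the log shows the pattern escaped character by character: an unquoted right-hand side of `[[ == ]]` is a glob pattern, so the helper escapes it to force a literal comparison that includes the trailing separators.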
00:07:28.978 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.978 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.978 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74243' 00:07:28.978 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74243 00:07:28.978 [2024-12-13 04:23:28.747091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.978 [2024-12-13 04:23:28.747160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.978 [2024-12-13 04:23:28.747208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.978 [2024-12-13 04:23:28.747216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:28.978 04:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74243 00:07:28.978 [2024-12-13 04:23:28.789039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.238 04:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:29.238 00:07:29.238 real 0m3.552s 00:07:29.238 user 0m5.338s 00:07:29.238 sys 0m0.817s 00:07:29.238 04:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.238 04:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.238 ************************************ 00:07:29.238 END TEST raid_superblock_test 00:07:29.238 ************************************ 00:07:29.238 04:23:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:29.238 04:23:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:29.238 04:23:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.238 04:23:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.238 ************************************ 00:07:29.238 START TEST raid_read_error_test 00:07:29.238 ************************************ 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:29.238 04:23:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YLxyNeNP24 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74438 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74438 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74438 ']' 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.238 04:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.498 [2024-12-13 04:23:29.288588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:29.498 [2024-12-13 04:23:29.288733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74438 ] 00:07:29.498 [2024-12-13 04:23:29.443011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.498 [2024-12-13 04:23:29.479998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.757 [2024-12-13 04:23:29.555782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.757 [2024-12-13 04:23:29.555824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.330 BaseBdev1_malloc 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
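The `waitforlisten 74438` call above blocks until the freshly started bdevperf process answers on its RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A simplified sketch of that poll loop, as an annotation only — the path, timing, and check are illustrative, and the real helper additionally verifies the pid is still alive and probes the socket with an actual RPC:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten-style poll: wait until a socket path appears,
# giving up after max_retries attempts. Existence stands in for the real
# helper's liveness + RPC probe.
wait_for_listen() {
    local rpc_addr=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $rpc_addr ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}

# Usage sketch: simulate the server creating its socket shortly after start.
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &
wait_for_listen "$sock" 50 && echo "listening on $sock"
rm -f "$sock"
wait
```

Polling with a bounded retry count is what lets the harness print the "Waiting for process to start up..." message once and still fail deterministically if the target never comes up.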
00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.330 true 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.330 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.330 [2024-12-13 04:23:30.144344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:30.330 [2024-12-13 04:23:30.144432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.330 [2024-12-13 04:23:30.144471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:30.330 [2024-12-13 04:23:30.144481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.330 [2024-12-13 04:23:30.146934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.331 [2024-12-13 04:23:30.146968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:30.331 BaseBdev1 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:30.331 BaseBdev2_malloc 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.331 true 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.331 [2024-12-13 04:23:30.190884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:30.331 [2024-12-13 04:23:30.190936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.331 [2024-12-13 04:23:30.190959] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:30.331 [2024-12-13 04:23:30.190977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.331 [2024-12-13 04:23:30.193521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.331 [2024-12-13 04:23:30.193558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:30.331 BaseBdev2 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:30.331 04:23:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.331 [2024-12-13 04:23:30.202919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.331 [2024-12-13 04:23:30.205137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.331 [2024-12-13 04:23:30.205340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:30.331 [2024-12-13 04:23:30.205353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.331 [2024-12-13 04:23:30.205627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:30.331 [2024-12-13 04:23:30.205804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:30.331 [2024-12-13 04:23:30.205831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:30.331 [2024-12-13 04:23:30.205972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
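The DEBUG lines above show where the raid0 geometry comes from: each base bdev is a 32 MiB malloc bdev with 512-byte blocks (`bdev_malloc_create 32 512`), the `-s` superblock reserves a 2048-block `data_offset`, and raid0 concatenates the striped remainder of both base bdevs into the `blockcnt 126976` the log reports. A quick arithmetic cross-check — the values are taken from the log, but the layout derivation (data_size = base blocks minus data_offset) is inferred, not authoritative:

```shell
#!/usr/bin/env bash
# Recompute the raid0 block count printed in the log
# ("blockcnt 126976, blocklen 512") from the base-bdev parameters.
set -u

blocklen=512
malloc_mib=32            # bdev_malloc_create 32 512
num_base_bdevs=2
data_offset=2048         # blocks reserved ahead of data by the superblock

base_blocks=$(( malloc_mib * 1024 * 1024 / blocklen ))   # 65536 blocks/base
data_size=$(( base_blocks - data_offset ))               # 63488, as in the JSON
raid_blockcnt=$(( num_base_bdevs * data_size ))          # 126976, as logged

echo "base=$base_blocks data_size=$data_size raid_blockcnt=$raid_blockcnt"
```

The `data_size: 63488` and `data_offset: 2048` fields in every `base_bdevs_list` entry above are consistent with this derivation.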
00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.331 "name": "raid_bdev1", 00:07:30.331 "uuid": "6e2939ea-7149-4fa2-ba9a-d3aab35ad2b0", 00:07:30.331 "strip_size_kb": 64, 00:07:30.331 "state": "online", 00:07:30.331 "raid_level": "raid0", 00:07:30.331 "superblock": true, 00:07:30.331 "num_base_bdevs": 2, 00:07:30.331 "num_base_bdevs_discovered": 2, 00:07:30.331 "num_base_bdevs_operational": 2, 00:07:30.331 "base_bdevs_list": [ 00:07:30.331 { 00:07:30.331 "name": "BaseBdev1", 00:07:30.331 "uuid": "faf7c639-dab4-555e-8a28-99a1da65776b", 00:07:30.331 "is_configured": true, 00:07:30.331 "data_offset": 2048, 00:07:30.331 "data_size": 63488 00:07:30.331 }, 00:07:30.331 { 00:07:30.331 "name": "BaseBdev2", 00:07:30.331 "uuid": "d5e3d06b-f3d7-590e-9ba5-c4a5354ff380", 00:07:30.331 "is_configured": true, 00:07:30.331 "data_offset": 2048, 00:07:30.331 "data_size": 63488 00:07:30.331 } 00:07:30.331 ] 00:07:30.331 }' 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.331 04:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.901 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:30.901 04:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.901 [2024-12-13 04:23:30.762428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.850 "name": "raid_bdev1", 00:07:31.850 "uuid": "6e2939ea-7149-4fa2-ba9a-d3aab35ad2b0", 00:07:31.850 "strip_size_kb": 64, 00:07:31.850 "state": "online", 00:07:31.850 "raid_level": "raid0", 00:07:31.850 "superblock": true, 00:07:31.850 "num_base_bdevs": 2, 00:07:31.850 "num_base_bdevs_discovered": 2, 00:07:31.850 "num_base_bdevs_operational": 2, 00:07:31.850 "base_bdevs_list": [ 00:07:31.850 { 00:07:31.850 "name": "BaseBdev1", 00:07:31.850 "uuid": "faf7c639-dab4-555e-8a28-99a1da65776b", 00:07:31.850 "is_configured": true, 00:07:31.850 "data_offset": 2048, 00:07:31.850 "data_size": 63488 00:07:31.850 }, 00:07:31.850 { 00:07:31.850 "name": "BaseBdev2", 00:07:31.850 "uuid": "d5e3d06b-f3d7-590e-9ba5-c4a5354ff380", 00:07:31.850 "is_configured": true, 00:07:31.850 "data_offset": 2048, 00:07:31.850 "data_size": 63488 00:07:31.850 } 00:07:31.850 ] 00:07:31.850 }' 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.850 04:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.127 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.127 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.127 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.386 [2024-12-13 04:23:32.146354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.386 [2024-12-13 04:23:32.146409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.386 [2024-12-13 04:23:32.149034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.386 [2024-12-13 04:23:32.149106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.386 [2024-12-13 04:23:32.149146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.386 [2024-12-13 04:23:32.149157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:32.386 { 00:07:32.386 "results": [ 00:07:32.386 { 00:07:32.386 "job": "raid_bdev1", 00:07:32.386 "core_mask": "0x1", 00:07:32.386 "workload": "randrw", 00:07:32.386 "percentage": 50, 00:07:32.386 "status": "finished", 00:07:32.386 "queue_depth": 1, 00:07:32.387 "io_size": 131072, 00:07:32.387 "runtime": 1.384702, 00:07:32.387 "iops": 15682.79673171556, 00:07:32.387 "mibps": 1960.349591464445, 00:07:32.387 "io_failed": 1, 00:07:32.387 "io_timeout": 0, 00:07:32.387 "avg_latency_us": 88.79100344587472, 00:07:32.387 "min_latency_us": 25.3764192139738, 00:07:32.387 "max_latency_us": 1294.9799126637554 00:07:32.387 } 00:07:32.387 ], 00:07:32.387 "core_count": 1 00:07:32.387 } 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74438 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74438 ']' 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74438 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74438 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.387 killing process with pid 74438 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74438' 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74438 00:07:32.387 [2024-12-13 04:23:32.197946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.387 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74438 00:07:32.387 [2024-12-13 04:23:32.224646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YLxyNeNP24 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.646 04:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:32.646 00:07:32.646 real 0m3.363s 00:07:32.646 user 0m4.204s 00:07:32.646 sys 0m0.586s 00:07:32.647 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.647 04:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.647 ************************************ 00:07:32.647 END TEST raid_read_error_test 00:07:32.647 ************************************ 00:07:32.647 04:23:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:32.647 04:23:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:32.647 04:23:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.647 04:23:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.647 ************************************ 00:07:32.647 START TEST raid_write_error_test 00:07:32.647 ************************************ 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.647 04:23:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KEZ4C6XmTW 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74573 00:07:32.647 04:23:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74573 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74573 ']' 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.647 04:23:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.907 [2024-12-13 04:23:32.720423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:32.907 [2024-12-13 04:23:32.720562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74573 ] 00:07:32.907 [2024-12-13 04:23:32.877224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.907 [2024-12-13 04:23:32.915953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.167 [2024-12-13 04:23:32.991816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.168 [2024-12-13 04:23:32.991861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 BaseBdev1_malloc 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 true 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 [2024-12-13 04:23:33.585304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:33.737 [2024-12-13 04:23:33.585367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.737 [2024-12-13 04:23:33.585391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:33.737 [2024-12-13 04:23:33.585400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.737 [2024-12-13 04:23:33.587833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.737 [2024-12-13 04:23:33.587872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:33.737 BaseBdev1 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 BaseBdev2_malloc 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:33.738 04:23:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.738 true 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.738 [2024-12-13 04:23:33.631623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:33.738 [2024-12-13 04:23:33.631670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.738 [2024-12-13 04:23:33.631690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:33.738 [2024-12-13 04:23:33.631707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.738 [2024-12-13 04:23:33.634106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.738 [2024-12-13 04:23:33.634140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:33.738 BaseBdev2 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.738 [2024-12-13 04:23:33.643645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:33.738 [2024-12-13 04:23:33.645872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.738 [2024-12-13 04:23:33.646056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:33.738 [2024-12-13 04:23:33.646081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.738 [2024-12-13 04:23:33.646353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:33.738 [2024-12-13 04:23:33.646546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:33.738 [2024-12-13 04:23:33.646566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:33.738 [2024-12-13 04:23:33.646703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.738 "name": "raid_bdev1", 00:07:33.738 "uuid": "6ce7fa09-c0f0-4119-ae7e-a4d89d108b56", 00:07:33.738 "strip_size_kb": 64, 00:07:33.738 "state": "online", 00:07:33.738 "raid_level": "raid0", 00:07:33.738 "superblock": true, 00:07:33.738 "num_base_bdevs": 2, 00:07:33.738 "num_base_bdevs_discovered": 2, 00:07:33.738 "num_base_bdevs_operational": 2, 00:07:33.738 "base_bdevs_list": [ 00:07:33.738 { 00:07:33.738 "name": "BaseBdev1", 00:07:33.738 "uuid": "f74e7d78-ae27-5149-9573-358ca64e9051", 00:07:33.738 "is_configured": true, 00:07:33.738 "data_offset": 2048, 00:07:33.738 "data_size": 63488 00:07:33.738 }, 00:07:33.738 { 00:07:33.738 "name": "BaseBdev2", 00:07:33.738 "uuid": "a943b838-3431-5381-93a0-19973a60b143", 00:07:33.738 "is_configured": true, 00:07:33.738 "data_offset": 2048, 00:07:33.738 "data_size": 63488 00:07:33.738 } 00:07:33.738 ] 00:07:33.738 }' 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.738 04:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.308 04:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:34.308 04:23:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:34.308 [2024-12-13 04:23:34.175179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:35.251 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:35.251 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.251 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.251 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.251 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:35.251 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.252 04:23:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.252 "name": "raid_bdev1", 00:07:35.252 "uuid": "6ce7fa09-c0f0-4119-ae7e-a4d89d108b56", 00:07:35.252 "strip_size_kb": 64, 00:07:35.252 "state": "online", 00:07:35.252 "raid_level": "raid0", 00:07:35.252 "superblock": true, 00:07:35.252 "num_base_bdevs": 2, 00:07:35.252 "num_base_bdevs_discovered": 2, 00:07:35.252 "num_base_bdevs_operational": 2, 00:07:35.252 "base_bdevs_list": [ 00:07:35.252 { 00:07:35.252 "name": "BaseBdev1", 00:07:35.252 "uuid": "f74e7d78-ae27-5149-9573-358ca64e9051", 00:07:35.252 "is_configured": true, 00:07:35.252 "data_offset": 2048, 00:07:35.252 "data_size": 63488 00:07:35.252 }, 00:07:35.252 { 00:07:35.252 "name": "BaseBdev2", 00:07:35.252 "uuid": "a943b838-3431-5381-93a0-19973a60b143", 00:07:35.252 "is_configured": true, 00:07:35.252 "data_offset": 2048, 00:07:35.252 "data_size": 63488 00:07:35.252 } 00:07:35.252 ] 00:07:35.252 }' 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.252 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.516 [2024-12-13 04:23:35.495157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.516 [2024-12-13 04:23:35.495205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.516 [2024-12-13 04:23:35.497793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.516 [2024-12-13 04:23:35.497839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.516 [2024-12-13 04:23:35.497879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.516 [2024-12-13 04:23:35.497890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:35.516 { 00:07:35.516 "results": [ 00:07:35.516 { 00:07:35.516 "job": "raid_bdev1", 00:07:35.516 "core_mask": "0x1", 00:07:35.516 "workload": "randrw", 00:07:35.516 "percentage": 50, 00:07:35.516 "status": "finished", 00:07:35.516 "queue_depth": 1, 00:07:35.516 "io_size": 131072, 00:07:35.516 "runtime": 1.320628, 00:07:35.516 "iops": 15610.754883282802, 00:07:35.516 "mibps": 1951.3443604103502, 00:07:35.516 "io_failed": 1, 00:07:35.516 "io_timeout": 0, 00:07:35.516 "avg_latency_us": 89.27098640139471, 00:07:35.516 "min_latency_us": 25.2646288209607, 00:07:35.516 "max_latency_us": 1345.0620087336245 00:07:35.516 } 00:07:35.516 ], 00:07:35.516 "core_count": 1 00:07:35.516 } 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74573 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 74573 ']' 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74573 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.516 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74573 00:07:35.775 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.775 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.775 killing process with pid 74573 00:07:35.775 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74573' 00:07:35.775 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74573 00:07:35.775 [2024-12-13 04:23:35.545375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.776 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74573 00:07:35.776 [2024-12-13 04:23:35.574035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KEZ4C6XmTW 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:07:36.036 00:07:36.036 real 0m3.279s 00:07:36.036 user 0m4.051s 00:07:36.036 sys 0m0.566s 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.036 04:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.036 ************************************ 00:07:36.036 END TEST raid_write_error_test 00:07:36.036 ************************************ 00:07:36.036 04:23:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:36.036 04:23:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:36.036 04:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.036 04:23:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.036 04:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.036 ************************************ 00:07:36.036 START TEST raid_state_function_test 00:07:36.036 ************************************ 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74700
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74700'
00:07:36.036 Process raid pid: 74700
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74700
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74700 ']'
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.036 04:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.296 [2024-12-13 04:23:36.069772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:07:36.296 [2024-12-13 04:23:36.069887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:36.296 [2024-12-13 04:23:36.223317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.296 [2024-12-13 04:23:36.262485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.556 [2024-12-13 04:23:36.338788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:36.556 [2024-12-13 04:23:36.338833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.126 [2024-12-13 04:23:36.897481] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:37.126 [2024-12-13 04:23:36.897539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:37.126 [2024-12-13 04:23:36.897555] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:37.126 [2024-12-13 04:23:36.897567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:37.126 "name": "Existed_Raid",
00:07:37.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.126 "strip_size_kb": 64,
00:07:37.126 "state": "configuring",
00:07:37.126 "raid_level": "concat",
00:07:37.126 "superblock": false,
00:07:37.126 "num_base_bdevs": 2,
00:07:37.126 "num_base_bdevs_discovered": 0,
00:07:37.126 "num_base_bdevs_operational": 2,
00:07:37.126 "base_bdevs_list": [
00:07:37.126 {
00:07:37.126 "name": "BaseBdev1",
00:07:37.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.126 "is_configured": false,
00:07:37.126 "data_offset": 0,
00:07:37.126 "data_size": 0
00:07:37.126 },
00:07:37.126 {
00:07:37.126 "name": "BaseBdev2",
00:07:37.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.126 "is_configured": false,
00:07:37.126 "data_offset": 0,
00:07:37.126 "data_size": 0
00:07:37.126 }
00:07:37.126 ]
00:07:37.126 }'
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:37.126 04:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.387 [2024-12-13 04:23:37.312669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:37.387 [2024-12-13 04:23:37.312719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.387 [2024-12-13 04:23:37.324655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:37.387 [2024-12-13 04:23:37.324691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:37.387 [2024-12-13 04:23:37.324699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:37.387 [2024-12-13 04:23:37.324721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.387 [2024-12-13 04:23:37.351377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:37.387 BaseBdev1
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.387 [
00:07:37.387 {
00:07:37.387 "name": "BaseBdev1",
00:07:37.387 "aliases": [
00:07:37.387 "f73806f3-eb34-412e-bda0-c4302ca32c6d"
00:07:37.387 ],
00:07:37.387 "product_name": "Malloc disk",
00:07:37.387 "block_size": 512,
00:07:37.387 "num_blocks": 65536,
00:07:37.387 "uuid": "f73806f3-eb34-412e-bda0-c4302ca32c6d",
00:07:37.387 "assigned_rate_limits": {
00:07:37.387 "rw_ios_per_sec": 0,
00:07:37.387 "rw_mbytes_per_sec": 0,
00:07:37.387 "r_mbytes_per_sec": 0,
00:07:37.387 "w_mbytes_per_sec": 0
00:07:37.387 },
00:07:37.387 "claimed": true,
00:07:37.387 "claim_type": "exclusive_write",
00:07:37.387 "zoned": false,
00:07:37.387 "supported_io_types": {
00:07:37.387 "read": true,
00:07:37.387 "write": true,
00:07:37.387 "unmap": true,
00:07:37.387 "flush": true,
00:07:37.387 "reset": true,
00:07:37.387 "nvme_admin": false,
00:07:37.387 "nvme_io": false,
00:07:37.387 "nvme_io_md": false,
00:07:37.387 "write_zeroes": true,
00:07:37.387 "zcopy": true,
00:07:37.387 "get_zone_info": false,
00:07:37.387 "zone_management": false,
00:07:37.387 "zone_append": false,
00:07:37.387 "compare": false,
00:07:37.387 "compare_and_write": false,
00:07:37.387 "abort": true,
00:07:37.387 "seek_hole": false,
00:07:37.387 "seek_data": false,
00:07:37.387 "copy": true,
00:07:37.387 "nvme_iov_md": false
00:07:37.387 },
00:07:37.387 "memory_domains": [
00:07:37.387 {
00:07:37.387 "dma_device_id": "system",
00:07:37.387 "dma_device_type": 1
00:07:37.387 },
00:07:37.387 {
00:07:37.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:37.387 "dma_device_type": 2
00:07:37.387 }
00:07:37.387 ],
00:07:37.387 "driver_specific": {}
00:07:37.387 }
00:07:37.387 ]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:37.387 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.647 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.647 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:37.647 "name": "Existed_Raid",
00:07:37.647 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.647 "strip_size_kb": 64,
00:07:37.647 "state": "configuring",
00:07:37.647 "raid_level": "concat",
00:07:37.647 "superblock": false,
00:07:37.647 "num_base_bdevs": 2,
00:07:37.647 "num_base_bdevs_discovered": 1,
00:07:37.647 "num_base_bdevs_operational": 2,
00:07:37.647 "base_bdevs_list": [
00:07:37.647 {
00:07:37.647 "name": "BaseBdev1",
00:07:37.647 "uuid": "f73806f3-eb34-412e-bda0-c4302ca32c6d",
00:07:37.647 "is_configured": true,
00:07:37.647 "data_offset": 0,
00:07:37.647 "data_size": 65536
00:07:37.647 },
00:07:37.647 {
00:07:37.647 "name": "BaseBdev2",
00:07:37.647 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.647 "is_configured": false,
00:07:37.647 "data_offset": 0,
00:07:37.647 "data_size": 0
00:07:37.647 }
00:07:37.647 ]
00:07:37.648 }'
00:07:37.648 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:37.648 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.908 [2024-12-13 04:23:37.806639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:37.908 [2024-12-13 04:23:37.806683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.908 [2024-12-13 04:23:37.818627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:37.908 [2024-12-13 04:23:37.820798] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:37.908 [2024-12-13 04:23:37.820838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:37.908 "name": "Existed_Raid",
00:07:37.908 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.908 "strip_size_kb": 64,
00:07:37.908 "state": "configuring",
00:07:37.908 "raid_level": "concat",
00:07:37.908 "superblock": false,
00:07:37.908 "num_base_bdevs": 2,
00:07:37.908 "num_base_bdevs_discovered": 1,
00:07:37.908 "num_base_bdevs_operational": 2,
00:07:37.908 "base_bdevs_list": [
00:07:37.908 {
00:07:37.908 "name": "BaseBdev1",
00:07:37.908 "uuid": "f73806f3-eb34-412e-bda0-c4302ca32c6d",
00:07:37.908 "is_configured": true,
00:07:37.908 "data_offset": 0,
00:07:37.908 "data_size": 65536
00:07:37.908 },
00:07:37.908 {
00:07:37.908 "name": "BaseBdev2",
00:07:37.908 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.908 "is_configured": false,
00:07:37.908 "data_offset": 0,
00:07:37.908 "data_size": 0
00:07:37.908 }
00:07:37.908 ]
00:07:37.908 }'
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:37.908 04:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.478 [2024-12-13 04:23:38.270385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:38.478 [2024-12-13 04:23:38.270432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:07:38.478 [2024-12-13 04:23:38.270453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:38.478 [2024-12-13 04:23:38.270757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:07:38.478 [2024-12-13 04:23:38.270955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:07:38.478 [2024-12-13 04:23:38.270983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:07:38.478 [2024-12-13 04:23:38.271187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:38.478 BaseBdev2
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.478 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.478 [
00:07:38.478 {
00:07:38.478 "name": "BaseBdev2",
00:07:38.478 "aliases": [
00:07:38.478 "bc384a1b-6dcb-4768-aac7-e60e19b18345"
00:07:38.478 ],
00:07:38.478 "product_name": "Malloc disk",
00:07:38.478 "block_size": 512,
00:07:38.478 "num_blocks": 65536,
00:07:38.478 "uuid": "bc384a1b-6dcb-4768-aac7-e60e19b18345",
00:07:38.478 "assigned_rate_limits": {
00:07:38.478 "rw_ios_per_sec": 0,
00:07:38.478 "rw_mbytes_per_sec": 0,
00:07:38.478 "r_mbytes_per_sec": 0,
00:07:38.478 "w_mbytes_per_sec": 0
00:07:38.478 },
00:07:38.478 "claimed": true,
00:07:38.478 "claim_type": "exclusive_write",
00:07:38.478 "zoned": false,
00:07:38.478 "supported_io_types": {
00:07:38.478 "read": true,
00:07:38.478 "write": true,
00:07:38.478 "unmap": true,
00:07:38.478 "flush": true,
00:07:38.478 "reset": true,
00:07:38.478 "nvme_admin": false,
00:07:38.478 "nvme_io": false,
00:07:38.478 "nvme_io_md": false,
00:07:38.478 "write_zeroes": true,
00:07:38.478 "zcopy": true,
00:07:38.478 "get_zone_info": false,
00:07:38.478 "zone_management": false,
00:07:38.478 "zone_append": false,
00:07:38.478 "compare": false,
00:07:38.478 "compare_and_write": false,
00:07:38.478 "abort": true,
00:07:38.478 "seek_hole": false,
00:07:38.478 "seek_data": false,
00:07:38.478 "copy": true,
00:07:38.478 "nvme_iov_md": false
00:07:38.478 },
00:07:38.478 "memory_domains": [
00:07:38.478 {
00:07:38.478 "dma_device_id": "system",
00:07:38.478 "dma_device_type": 1
00:07:38.478 },
00:07:38.478 {
00:07:38.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:38.478 "dma_device_type": 2
00:07:38.478 }
00:07:38.478 ],
00:07:38.478 "driver_specific": {}
00:07:38.478 }
00:07:38.478 ]
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:38.479 "name": "Existed_Raid",
00:07:38.479 "uuid": "bbf12fd9-4410-4ab3-ae2c-40db5d44412b",
00:07:38.479 "strip_size_kb": 64,
00:07:38.479 "state": "online",
00:07:38.479 "raid_level": "concat",
00:07:38.479 "superblock": false,
00:07:38.479 "num_base_bdevs": 2,
00:07:38.479 "num_base_bdevs_discovered": 2,
00:07:38.479 "num_base_bdevs_operational": 2,
00:07:38.479 "base_bdevs_list": [
00:07:38.479 {
00:07:38.479 "name": "BaseBdev1",
00:07:38.479 "uuid": "f73806f3-eb34-412e-bda0-c4302ca32c6d",
00:07:38.479 "is_configured": true,
00:07:38.479 "data_offset": 0,
00:07:38.479 "data_size": 65536
00:07:38.479 },
00:07:38.479 {
00:07:38.479 "name": "BaseBdev2",
00:07:38.479 "uuid": "bc384a1b-6dcb-4768-aac7-e60e19b18345",
00:07:38.479 "is_configured": true,
00:07:38.479 "data_offset": 0,
00:07:38.479 "data_size": 65536
00:07:38.479 }
00:07:38.479 ]
00:07:38.479 }'
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:38.479 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.740 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.000 [2024-12-13 04:23:38.757799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:39.000 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.000 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:39.000 "name": "Existed_Raid",
00:07:39.000 "aliases": [
00:07:39.000 "bbf12fd9-4410-4ab3-ae2c-40db5d44412b"
00:07:39.000 ],
00:07:39.000 "product_name": "Raid Volume",
00:07:39.000 "block_size": 512,
00:07:39.000 "num_blocks": 131072,
00:07:39.000 "uuid": "bbf12fd9-4410-4ab3-ae2c-40db5d44412b",
00:07:39.000 "assigned_rate_limits": {
00:07:39.000 "rw_ios_per_sec": 0,
00:07:39.000 "rw_mbytes_per_sec": 0,
00:07:39.000 "r_mbytes_per_sec": 0,
00:07:39.000 "w_mbytes_per_sec": 0
00:07:39.000 },
00:07:39.000 "claimed": false,
00:07:39.000 "zoned": false,
00:07:39.000 "supported_io_types": {
00:07:39.000 "read": true,
00:07:39.000 "write": true,
00:07:39.000 "unmap": true,
00:07:39.000 "flush": true,
00:07:39.000 "reset": true,
00:07:39.000 "nvme_admin": false,
00:07:39.000 "nvme_io": false,
00:07:39.000 "nvme_io_md": false,
00:07:39.000 "write_zeroes": true,
00:07:39.000 "zcopy": false,
00:07:39.000 "get_zone_info": false,
00:07:39.000 "zone_management": false,
00:07:39.000 "zone_append": false,
00:07:39.000 "compare": false,
00:07:39.000 "compare_and_write": false,
00:07:39.000 "abort": false,
00:07:39.000 "seek_hole": false,
00:07:39.000 "seek_data": false,
00:07:39.000 "copy": false,
00:07:39.000 "nvme_iov_md": false
00:07:39.000 },
00:07:39.000 "memory_domains": [
00:07:39.000 {
00:07:39.000 "dma_device_id": "system",
00:07:39.000 "dma_device_type": 1
00:07:39.000 },
00:07:39.000 {
00:07:39.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:39.000 "dma_device_type": 2
00:07:39.000 },
00:07:39.000 {
00:07:39.000 "dma_device_id": "system",
00:07:39.000 "dma_device_type": 1
00:07:39.000 },
00:07:39.000 {
00:07:39.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:39.000 "dma_device_type": 2
00:07:39.000 }
00:07:39.000 ],
00:07:39.000 "driver_specific": {
00:07:39.000 "raid": {
00:07:39.000 "uuid": "bbf12fd9-4410-4ab3-ae2c-40db5d44412b",
00:07:39.000 "strip_size_kb": 64,
00:07:39.000 "state": "online",
00:07:39.000 "raid_level": "concat",
00:07:39.001 "superblock": false,
00:07:39.001 "num_base_bdevs": 2,
00:07:39.001 "num_base_bdevs_discovered": 2,
00:07:39.001 "num_base_bdevs_operational": 2,
00:07:39.001 "base_bdevs_list": [
00:07:39.001 {
00:07:39.001 "name": "BaseBdev1",
00:07:39.001 "uuid": "f73806f3-eb34-412e-bda0-c4302ca32c6d",
00:07:39.001 "is_configured": true,
00:07:39.001 "data_offset": 0,
00:07:39.001 "data_size": 65536
00:07:39.001 },
00:07:39.001 {
00:07:39.001 "name": "BaseBdev2",
00:07:39.001 "uuid": "bc384a1b-6dcb-4768-aac7-e60e19b18345",
00:07:39.001 "is_configured": true,
00:07:39.001 "data_offset": 0,
00:07:39.001 "data_size": 65536
00:07:39.001 }
00:07:39.001 ]
00:07:39.001 }
00:07:39.001 }
00:07:39.001 }'
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:39.001 BaseBdev2'
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.001 04:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.001 [2024-12-13 04:23:38.981233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.001 [2024-12-13 04:23:38.981262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.001 [2024-12-13 04:23:38.981317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.001 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.260 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.260 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.260 "name": "Existed_Raid", 00:07:39.260 "uuid": "bbf12fd9-4410-4ab3-ae2c-40db5d44412b", 00:07:39.260 "strip_size_kb": 64, 00:07:39.260 
"state": "offline", 00:07:39.260 "raid_level": "concat", 00:07:39.260 "superblock": false, 00:07:39.260 "num_base_bdevs": 2, 00:07:39.260 "num_base_bdevs_discovered": 1, 00:07:39.260 "num_base_bdevs_operational": 1, 00:07:39.260 "base_bdevs_list": [ 00:07:39.260 { 00:07:39.260 "name": null, 00:07:39.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.260 "is_configured": false, 00:07:39.260 "data_offset": 0, 00:07:39.260 "data_size": 65536 00:07:39.260 }, 00:07:39.260 { 00:07:39.260 "name": "BaseBdev2", 00:07:39.260 "uuid": "bc384a1b-6dcb-4768-aac7-e60e19b18345", 00:07:39.260 "is_configured": true, 00:07:39.260 "data_offset": 0, 00:07:39.260 "data_size": 65536 00:07:39.260 } 00:07:39.260 ] 00:07:39.260 }' 00:07:39.261 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.261 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.521 [2024-12-13 04:23:39.505134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.521 [2024-12-13 04:23:39.505186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.521 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74700 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74700 ']' 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 74700 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74700 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.780 killing process with pid 74700 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74700' 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74700 00:07:39.780 [2024-12-13 04:23:39.619177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.780 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74700 00:07:39.780 [2024-12-13 04:23:39.620735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.041 04:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:40.041 00:07:40.041 real 0m3.964s 00:07:40.041 user 0m6.102s 00:07:40.041 sys 0m0.877s 00:07:40.041 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.041 04:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.041 ************************************ 00:07:40.041 END TEST raid_state_function_test 00:07:40.041 ************************************ 00:07:40.041 04:23:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:40.041 04:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:40.041 04:23:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.041 04:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.041 ************************************ 00:07:40.041 START TEST raid_state_function_test_sb 00:07:40.041 ************************************ 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74942 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.041 Process raid pid: 74942 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74942' 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74942 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74942 ']' 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.041 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.041 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.301 [2024-12-13 04:23:40.111520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:40.301 [2024-12-13 04:23:40.111644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.301 [2024-12-13 04:23:40.258488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.301 [2024-12-13 04:23:40.296244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.560 [2024-12-13 04:23:40.373827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.560 [2024-12-13 04:23:40.373871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.131 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.131 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.132 [2024-12-13 04:23:40.932659] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:41.132 [2024-12-13 04:23:40.932721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.132 [2024-12-13 04:23:40.932730] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.132 [2024-12-13 04:23:40.932740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.132 "name": "Existed_Raid", 00:07:41.132 "uuid": "5441b7e6-af9b-49b6-a2ac-8d22d06ee58c", 00:07:41.132 "strip_size_kb": 64, 00:07:41.132 "state": "configuring", 00:07:41.132 "raid_level": "concat", 00:07:41.132 "superblock": true, 00:07:41.132 "num_base_bdevs": 2, 00:07:41.132 "num_base_bdevs_discovered": 0, 00:07:41.132 "num_base_bdevs_operational": 2, 00:07:41.132 "base_bdevs_list": [ 00:07:41.132 { 00:07:41.132 "name": "BaseBdev1", 00:07:41.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.132 "is_configured": false, 00:07:41.132 "data_offset": 0, 00:07:41.132 "data_size": 0 00:07:41.132 }, 00:07:41.132 { 00:07:41.132 "name": "BaseBdev2", 00:07:41.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.132 "is_configured": false, 00:07:41.132 "data_offset": 0, 00:07:41.132 "data_size": 0 00:07:41.132 } 00:07:41.132 ] 00:07:41.132 }' 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.132 04:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 [2024-12-13 04:23:41.379940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:41.392 [2024-12-13 04:23:41.379995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 [2024-12-13 04:23:41.387953] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.392 [2024-12-13 04:23:41.387995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.392 [2024-12-13 04:23:41.388003] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.392 [2024-12-13 04:23:41.388025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.392 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.652 [2024-12-13 04:23:41.410859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.652 BaseBdev1 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.652 [ 00:07:41.652 { 00:07:41.652 "name": "BaseBdev1", 00:07:41.652 "aliases": [ 00:07:41.652 "543e182e-31d4-4093-b608-b060388914f9" 00:07:41.652 ], 00:07:41.652 "product_name": "Malloc disk", 00:07:41.652 "block_size": 512, 00:07:41.652 "num_blocks": 65536, 00:07:41.652 "uuid": "543e182e-31d4-4093-b608-b060388914f9", 00:07:41.652 "assigned_rate_limits": { 00:07:41.652 "rw_ios_per_sec": 0, 00:07:41.652 "rw_mbytes_per_sec": 0, 00:07:41.652 "r_mbytes_per_sec": 0, 00:07:41.652 "w_mbytes_per_sec": 0 00:07:41.652 }, 00:07:41.652 "claimed": true, 
00:07:41.652 "claim_type": "exclusive_write", 00:07:41.652 "zoned": false, 00:07:41.652 "supported_io_types": { 00:07:41.652 "read": true, 00:07:41.652 "write": true, 00:07:41.652 "unmap": true, 00:07:41.652 "flush": true, 00:07:41.652 "reset": true, 00:07:41.652 "nvme_admin": false, 00:07:41.652 "nvme_io": false, 00:07:41.652 "nvme_io_md": false, 00:07:41.652 "write_zeroes": true, 00:07:41.652 "zcopy": true, 00:07:41.652 "get_zone_info": false, 00:07:41.652 "zone_management": false, 00:07:41.652 "zone_append": false, 00:07:41.652 "compare": false, 00:07:41.652 "compare_and_write": false, 00:07:41.652 "abort": true, 00:07:41.652 "seek_hole": false, 00:07:41.652 "seek_data": false, 00:07:41.652 "copy": true, 00:07:41.652 "nvme_iov_md": false 00:07:41.652 }, 00:07:41.652 "memory_domains": [ 00:07:41.652 { 00:07:41.652 "dma_device_id": "system", 00:07:41.652 "dma_device_type": 1 00:07:41.652 }, 00:07:41.652 { 00:07:41.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.652 "dma_device_type": 2 00:07:41.652 } 00:07:41.652 ], 00:07:41.652 "driver_specific": {} 00:07:41.652 } 00:07:41.652 ] 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.652 04:23:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.652 "name": "Existed_Raid", 00:07:41.652 "uuid": "39cdd758-a9f9-40d8-a5d4-b590c9019da5", 00:07:41.652 "strip_size_kb": 64, 00:07:41.652 "state": "configuring", 00:07:41.652 "raid_level": "concat", 00:07:41.652 "superblock": true, 00:07:41.652 "num_base_bdevs": 2, 00:07:41.652 "num_base_bdevs_discovered": 1, 00:07:41.652 "num_base_bdevs_operational": 2, 00:07:41.652 "base_bdevs_list": [ 00:07:41.652 { 00:07:41.652 "name": "BaseBdev1", 00:07:41.652 "uuid": "543e182e-31d4-4093-b608-b060388914f9", 00:07:41.652 "is_configured": true, 00:07:41.652 "data_offset": 2048, 00:07:41.652 "data_size": 63488 00:07:41.652 }, 00:07:41.652 { 00:07:41.652 "name": "BaseBdev2", 00:07:41.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.652 
"is_configured": false, 00:07:41.652 "data_offset": 0, 00:07:41.652 "data_size": 0 00:07:41.652 } 00:07:41.652 ] 00:07:41.652 }' 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.652 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 [2024-12-13 04:23:41.838149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.913 [2024-12-13 04:23:41.838191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 [2024-12-13 04:23:41.850160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.913 [2024-12-13 04:23:41.852280] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.913 [2024-12-13 04:23:41.852321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.913 04:23:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.913 04:23:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.913 "name": "Existed_Raid", 00:07:41.913 "uuid": "68bcd934-9772-41ad-a06e-cdfe210778e9", 00:07:41.913 "strip_size_kb": 64, 00:07:41.913 "state": "configuring", 00:07:41.913 "raid_level": "concat", 00:07:41.913 "superblock": true, 00:07:41.913 "num_base_bdevs": 2, 00:07:41.913 "num_base_bdevs_discovered": 1, 00:07:41.913 "num_base_bdevs_operational": 2, 00:07:41.913 "base_bdevs_list": [ 00:07:41.913 { 00:07:41.913 "name": "BaseBdev1", 00:07:41.913 "uuid": "543e182e-31d4-4093-b608-b060388914f9", 00:07:41.913 "is_configured": true, 00:07:41.913 "data_offset": 2048, 00:07:41.913 "data_size": 63488 00:07:41.913 }, 00:07:41.913 { 00:07:41.913 "name": "BaseBdev2", 00:07:41.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.913 "is_configured": false, 00:07:41.913 "data_offset": 0, 00:07:41.913 "data_size": 0 00:07:41.913 } 00:07:41.913 ] 00:07:41.913 }' 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.913 04:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 [2024-12-13 04:23:42.317929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.483 [2024-12-13 04:23:42.318131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:42.483 [2024-12-13 04:23:42.318147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.483 BaseBdev2 00:07:42.483 [2024-12-13 04:23:42.318451] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:42.483 [2024-12-13 04:23:42.318622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:42.483 [2024-12-13 04:23:42.318649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:42.483 [2024-12-13 04:23:42.318773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.483 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.483 
04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 [ 00:07:42.483 { 00:07:42.483 "name": "BaseBdev2", 00:07:42.483 "aliases": [ 00:07:42.483 "76f212d5-d8ee-4aac-8545-728240fab023" 00:07:42.483 ], 00:07:42.483 "product_name": "Malloc disk", 00:07:42.483 "block_size": 512, 00:07:42.483 "num_blocks": 65536, 00:07:42.483 "uuid": "76f212d5-d8ee-4aac-8545-728240fab023", 00:07:42.483 "assigned_rate_limits": { 00:07:42.483 "rw_ios_per_sec": 0, 00:07:42.483 "rw_mbytes_per_sec": 0, 00:07:42.483 "r_mbytes_per_sec": 0, 00:07:42.483 "w_mbytes_per_sec": 0 00:07:42.483 }, 00:07:42.483 "claimed": true, 00:07:42.483 "claim_type": "exclusive_write", 00:07:42.483 "zoned": false, 00:07:42.483 "supported_io_types": { 00:07:42.483 "read": true, 00:07:42.483 "write": true, 00:07:42.483 "unmap": true, 00:07:42.483 "flush": true, 00:07:42.483 "reset": true, 00:07:42.483 "nvme_admin": false, 00:07:42.483 "nvme_io": false, 00:07:42.483 "nvme_io_md": false, 00:07:42.483 "write_zeroes": true, 00:07:42.483 "zcopy": true, 00:07:42.483 "get_zone_info": false, 00:07:42.483 "zone_management": false, 00:07:42.484 "zone_append": false, 00:07:42.484 "compare": false, 00:07:42.484 "compare_and_write": false, 00:07:42.484 "abort": true, 00:07:42.484 "seek_hole": false, 00:07:42.484 "seek_data": false, 00:07:42.484 "copy": true, 00:07:42.484 "nvme_iov_md": false 00:07:42.484 }, 00:07:42.484 "memory_domains": [ 00:07:42.484 { 00:07:42.484 "dma_device_id": "system", 00:07:42.484 "dma_device_type": 1 00:07:42.484 }, 00:07:42.484 { 00:07:42.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.484 "dma_device_type": 2 00:07:42.484 } 00:07:42.484 ], 00:07:42.484 "driver_specific": {} 00:07:42.484 } 00:07:42.484 ] 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.484 04:23:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.484 04:23:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.484 "name": "Existed_Raid", 00:07:42.484 "uuid": "68bcd934-9772-41ad-a06e-cdfe210778e9", 00:07:42.484 "strip_size_kb": 64, 00:07:42.484 "state": "online", 00:07:42.484 "raid_level": "concat", 00:07:42.484 "superblock": true, 00:07:42.484 "num_base_bdevs": 2, 00:07:42.484 "num_base_bdevs_discovered": 2, 00:07:42.484 "num_base_bdevs_operational": 2, 00:07:42.484 "base_bdevs_list": [ 00:07:42.484 { 00:07:42.484 "name": "BaseBdev1", 00:07:42.484 "uuid": "543e182e-31d4-4093-b608-b060388914f9", 00:07:42.484 "is_configured": true, 00:07:42.484 "data_offset": 2048, 00:07:42.484 "data_size": 63488 00:07:42.484 }, 00:07:42.484 { 00:07:42.484 "name": "BaseBdev2", 00:07:42.484 "uuid": "76f212d5-d8ee-4aac-8545-728240fab023", 00:07:42.484 "is_configured": true, 00:07:42.484 "data_offset": 2048, 00:07:42.484 "data_size": 63488 00:07:42.484 } 00:07:42.484 ] 00:07:42.484 }' 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.484 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.054 [2024-12-13 04:23:42.785518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.054 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.054 "name": "Existed_Raid", 00:07:43.054 "aliases": [ 00:07:43.054 "68bcd934-9772-41ad-a06e-cdfe210778e9" 00:07:43.054 ], 00:07:43.054 "product_name": "Raid Volume", 00:07:43.054 "block_size": 512, 00:07:43.054 "num_blocks": 126976, 00:07:43.054 "uuid": "68bcd934-9772-41ad-a06e-cdfe210778e9", 00:07:43.054 "assigned_rate_limits": { 00:07:43.054 "rw_ios_per_sec": 0, 00:07:43.054 "rw_mbytes_per_sec": 0, 00:07:43.054 "r_mbytes_per_sec": 0, 00:07:43.054 "w_mbytes_per_sec": 0 00:07:43.054 }, 00:07:43.054 "claimed": false, 00:07:43.054 "zoned": false, 00:07:43.054 "supported_io_types": { 00:07:43.054 "read": true, 00:07:43.054 "write": true, 00:07:43.054 "unmap": true, 00:07:43.054 "flush": true, 00:07:43.054 "reset": true, 00:07:43.054 "nvme_admin": false, 00:07:43.054 "nvme_io": false, 00:07:43.054 "nvme_io_md": false, 00:07:43.054 "write_zeroes": true, 00:07:43.054 "zcopy": false, 00:07:43.054 "get_zone_info": false, 00:07:43.054 "zone_management": false, 00:07:43.054 "zone_append": false, 00:07:43.054 "compare": false, 00:07:43.054 "compare_and_write": false, 00:07:43.054 "abort": false, 00:07:43.054 "seek_hole": false, 00:07:43.054 "seek_data": false, 00:07:43.054 "copy": false, 00:07:43.054 "nvme_iov_md": false 00:07:43.054 }, 00:07:43.054 "memory_domains": [ 00:07:43.054 { 00:07:43.054 
"dma_device_id": "system", 00:07:43.054 "dma_device_type": 1 00:07:43.054 }, 00:07:43.054 { 00:07:43.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.054 "dma_device_type": 2 00:07:43.054 }, 00:07:43.054 { 00:07:43.054 "dma_device_id": "system", 00:07:43.054 "dma_device_type": 1 00:07:43.054 }, 00:07:43.054 { 00:07:43.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.054 "dma_device_type": 2 00:07:43.055 } 00:07:43.055 ], 00:07:43.055 "driver_specific": { 00:07:43.055 "raid": { 00:07:43.055 "uuid": "68bcd934-9772-41ad-a06e-cdfe210778e9", 00:07:43.055 "strip_size_kb": 64, 00:07:43.055 "state": "online", 00:07:43.055 "raid_level": "concat", 00:07:43.055 "superblock": true, 00:07:43.055 "num_base_bdevs": 2, 00:07:43.055 "num_base_bdevs_discovered": 2, 00:07:43.055 "num_base_bdevs_operational": 2, 00:07:43.055 "base_bdevs_list": [ 00:07:43.055 { 00:07:43.055 "name": "BaseBdev1", 00:07:43.055 "uuid": "543e182e-31d4-4093-b608-b060388914f9", 00:07:43.055 "is_configured": true, 00:07:43.055 "data_offset": 2048, 00:07:43.055 "data_size": 63488 00:07:43.055 }, 00:07:43.055 { 00:07:43.055 "name": "BaseBdev2", 00:07:43.055 "uuid": "76f212d5-d8ee-4aac-8545-728240fab023", 00:07:43.055 "is_configured": true, 00:07:43.055 "data_offset": 2048, 00:07:43.055 "data_size": 63488 00:07:43.055 } 00:07:43.055 ] 00:07:43.055 } 00:07:43.055 } 00:07:43.055 }' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.055 BaseBdev2' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.055 04:23:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.055 04:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.055 [2024-12-13 04:23:43.028916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.055 [2024-12-13 04:23:43.028943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.055 [2024-12-13 04:23:43.028994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.055 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.315 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.315 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.315 "name": "Existed_Raid", 00:07:43.315 "uuid": "68bcd934-9772-41ad-a06e-cdfe210778e9", 00:07:43.316 "strip_size_kb": 64, 00:07:43.316 "state": "offline", 00:07:43.316 "raid_level": "concat", 00:07:43.316 "superblock": true, 00:07:43.316 "num_base_bdevs": 2, 00:07:43.316 "num_base_bdevs_discovered": 1, 00:07:43.316 "num_base_bdevs_operational": 1, 00:07:43.316 "base_bdevs_list": [ 00:07:43.316 { 00:07:43.316 "name": null, 00:07:43.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.316 "is_configured": false, 00:07:43.316 "data_offset": 0, 00:07:43.316 "data_size": 63488 00:07:43.316 }, 00:07:43.316 { 00:07:43.316 "name": "BaseBdev2", 00:07:43.316 "uuid": "76f212d5-d8ee-4aac-8545-728240fab023", 00:07:43.316 "is_configured": true, 00:07:43.316 "data_offset": 2048, 00:07:43.316 "data_size": 63488 00:07:43.316 } 00:07:43.316 ] 
00:07:43.316 }' 00:07:43.316 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.316 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.581 [2024-12-13 04:23:43.512981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.581 [2024-12-13 04:23:43.513030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.581 04:23:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74942 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74942 ']' 00:07:43.581 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74942 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74942 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74942' 00:07:43.857 killing process with pid 74942 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74942 00:07:43.857 [2024-12-13 04:23:43.623741] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.857 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74942 00:07:43.857 [2024-12-13 04:23:43.625392] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.142 04:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.142 00:07:44.142 real 0m3.929s 00:07:44.142 user 0m6.058s 00:07:44.142 sys 0m0.845s 00:07:44.142 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.142 04:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.142 ************************************ 00:07:44.142 END TEST raid_state_function_test_sb 00:07:44.142 ************************************ 00:07:44.142 04:23:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:44.142 04:23:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:44.142 04:23:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.142 04:23:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.142 ************************************ 00:07:44.142 START TEST raid_superblock_test 00:07:44.142 ************************************ 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75183 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75183 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75183 ']' 00:07:44.142 04:23:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.142 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.142 [2024-12-13 04:23:44.104550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:44.142 [2024-12-13 04:23:44.104767] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75183 ] 00:07:44.402 [2024-12-13 04:23:44.260871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.402 [2024-12-13 04:23:44.298780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.402 [2024-12-13 04:23:44.373935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.402 [2024-12-13 04:23:44.374075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.972 
04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.972 malloc1 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.972 [2024-12-13 04:23:44.970347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.972 [2024-12-13 04:23:44.970478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.972 [2024-12-13 04:23:44.970517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:44.972 [2024-12-13 04:23:44.970556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:44.972 [2024-12-13 04:23:44.972952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.972 [2024-12-13 04:23:44.973024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.972 pt1 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:44.972 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:44.973 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.973 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.973 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.973 04:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:44.973 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.973 04:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.232 malloc2 00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.232 [2024-12-13 04:23:45.008968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:45.232 [2024-12-13 04:23:45.009063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:45.232 [2024-12-13 04:23:45.009097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:45.232 [2024-12-13 04:23:45.009126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:45.232 [2024-12-13 04:23:45.011415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:45.232 [2024-12-13 04:23:45.011495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:45.232 pt2
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.232 [2024-12-13 04:23:45.021003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:45.232 [2024-12-13 04:23:45.023054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:45.232 [2024-12-13 04:23:45.023212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:07:45.232 [2024-12-13 04:23:45.023231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:45.232 [2024-12-13 04:23:45.023492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:07:45.232 [2024-12-13 04:23:45.023634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:07:45.232 [2024-12-13 04:23:45.023644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:07:45.232 [2024-12-13 04:23:45.023785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:45.232 "name": "raid_bdev1",
00:07:45.232 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:45.232 "strip_size_kb": 64,
00:07:45.232 "state": "online",
00:07:45.232 "raid_level": "concat",
00:07:45.232 "superblock": true,
00:07:45.232 "num_base_bdevs": 2,
00:07:45.232 "num_base_bdevs_discovered": 2,
00:07:45.232 "num_base_bdevs_operational": 2,
00:07:45.232 "base_bdevs_list": [
00:07:45.232 {
00:07:45.232 "name": "pt1",
00:07:45.232 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:45.232 "is_configured": true,
00:07:45.232 "data_offset": 2048,
00:07:45.232 "data_size": 63488
00:07:45.232 },
00:07:45.232 {
00:07:45.232 "name": "pt2",
00:07:45.232 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:45.232 "is_configured": true,
00:07:45.232 "data_offset": 2048,
00:07:45.232 "data_size": 63488
00:07:45.232 }
00:07:45.232 ]
00:07:45.232 }'
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:45.232 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.491 [2024-12-13 04:23:45.452763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.491 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:45.491 "name": "raid_bdev1",
00:07:45.491 "aliases": [
00:07:45.491 "e6b4e1b9-557f-4416-90ed-8379dac61392"
00:07:45.491 ],
00:07:45.491 "product_name": "Raid Volume",
00:07:45.491 "block_size": 512,
00:07:45.491 "num_blocks": 126976,
00:07:45.491 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:45.491 "assigned_rate_limits": {
00:07:45.492 "rw_ios_per_sec": 0,
00:07:45.492 "rw_mbytes_per_sec": 0,
00:07:45.492 "r_mbytes_per_sec": 0,
00:07:45.492 "w_mbytes_per_sec": 0
00:07:45.492 },
00:07:45.492 "claimed": false,
00:07:45.492 "zoned": false,
00:07:45.492 "supported_io_types": {
00:07:45.492 "read": true,
00:07:45.492 "write": true,
00:07:45.492 "unmap": true,
00:07:45.492 "flush": true,
00:07:45.492 "reset": true,
00:07:45.492 "nvme_admin": false,
00:07:45.492 "nvme_io": false,
00:07:45.492 "nvme_io_md": false,
00:07:45.492 "write_zeroes": true,
00:07:45.492 "zcopy": false,
00:07:45.492 "get_zone_info": false,
00:07:45.492 "zone_management": false,
00:07:45.492 "zone_append": false,
00:07:45.492 "compare": false,
00:07:45.492 "compare_and_write": false,
00:07:45.492 "abort": false,
00:07:45.492 "seek_hole": false,
00:07:45.492 "seek_data": false,
00:07:45.492 "copy": false,
00:07:45.492 "nvme_iov_md": false
00:07:45.492 },
00:07:45.492 "memory_domains": [
00:07:45.492 {
00:07:45.492 "dma_device_id": "system",
00:07:45.492 "dma_device_type": 1
00:07:45.492 },
00:07:45.492 {
00:07:45.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:45.492 "dma_device_type": 2
00:07:45.492 },
00:07:45.492 {
00:07:45.492 "dma_device_id": "system",
00:07:45.492 "dma_device_type": 1
00:07:45.492 },
00:07:45.492 {
00:07:45.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:45.492 "dma_device_type": 2
00:07:45.492 }
00:07:45.492 ],
00:07:45.492 "driver_specific": {
00:07:45.492 "raid": {
00:07:45.492 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:45.492 "strip_size_kb": 64,
00:07:45.492 "state": "online",
00:07:45.492 "raid_level": "concat",
00:07:45.492 "superblock": true,
00:07:45.492 "num_base_bdevs": 2,
00:07:45.492 "num_base_bdevs_discovered": 2,
00:07:45.492 "num_base_bdevs_operational": 2,
00:07:45.492 "base_bdevs_list": [
00:07:45.492 {
00:07:45.492 "name": "pt1",
00:07:45.492 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:45.492 "is_configured": true,
00:07:45.492 "data_offset": 2048,
00:07:45.492 "data_size": 63488
00:07:45.492 },
00:07:45.492 {
00:07:45.492 "name": "pt2",
00:07:45.492 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:45.492 "is_configured": true,
00:07:45.492 "data_offset": 2048,
00:07:45.492 "data_size": 63488
00:07:45.492 }
00:07:45.492 ]
00:07:45.492 }
00:07:45.492 }
00:07:45.492 }'
00:07:45.492 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:45.752 pt2'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.752 [2024-12-13 04:23:45.684219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e6b4e1b9-557f-4416-90ed-8379dac61392
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e6b4e1b9-557f-4416-90ed-8379dac61392 ']'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.752 [2024-12-13 04:23:45.719932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:45.752 [2024-12-13 04:23:45.719960] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:45.752 [2024-12-13 04:23:45.720041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:45.752 [2024-12-13 04:23:45.720103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:45.752 [2024-12-13 04:23:45.720116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.752 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.012 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:46.012 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:46.012 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:46.012 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 [2024-12-13 04:23:45.851726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:46.013 [2024-12-13 04:23:45.853819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:46.013 [2024-12-13 04:23:45.853909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:46.013 [2024-12-13 04:23:45.853952] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:46.013 [2024-12-13 04:23:45.853967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:46.013 [2024-12-13 04:23:45.853975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:07:46.013 request:
00:07:46.013 {
00:07:46.013 "name": "raid_bdev1",
00:07:46.013 "raid_level": "concat",
00:07:46.013 "base_bdevs": [
00:07:46.013 "malloc1",
00:07:46.013 "malloc2"
00:07:46.013 ],
00:07:46.013 "strip_size_kb": 64,
00:07:46.013 "superblock": false,
00:07:46.013 "method": "bdev_raid_create",
00:07:46.013 "req_id": 1
00:07:46.013 }
00:07:46.013 Got JSON-RPC error response
00:07:46.013 response:
00:07:46.013 {
00:07:46.013 "code": -17,
00:07:46.013 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:46.013 }
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 [2024-12-13 04:23:45.903641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:46.013 [2024-12-13 04:23:45.903694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:46.013 [2024-12-13 04:23:45.903714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:46.013 [2024-12-13 04:23:45.903723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:46.013 [2024-12-13 04:23:45.906082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:46.013 [2024-12-13 04:23:45.906187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:46.013 [2024-12-13 04:23:45.906259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:46.013 [2024-12-13 04:23:45.906291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:46.013 pt1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:46.013 "name": "raid_bdev1",
00:07:46.013 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:46.013 "strip_size_kb": 64,
00:07:46.013 "state": "configuring",
00:07:46.013 "raid_level": "concat",
00:07:46.013 "superblock": true,
00:07:46.013 "num_base_bdevs": 2,
00:07:46.013 "num_base_bdevs_discovered": 1,
00:07:46.013 "num_base_bdevs_operational": 2,
00:07:46.013 "base_bdevs_list": [
00:07:46.013 {
00:07:46.013 "name": "pt1",
00:07:46.013 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:46.013 "is_configured": true,
00:07:46.013 "data_offset": 2048,
00:07:46.013 "data_size": 63488
00:07:46.013 },
00:07:46.013 {
00:07:46.013 "name": null,
00:07:46.013 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:46.013 "is_configured": false,
00:07:46.013 "data_offset": 2048,
00:07:46.013 "data_size": 63488
00:07:46.013 }
00:07:46.013 ]
00:07:46.013 }'
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.013 04:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.583 [2024-12-13 04:23:46.395046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:46.583 [2024-12-13 04:23:46.395193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:46.583 [2024-12-13 04:23:46.395235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:46.583 [2024-12-13 04:23:46.395266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:46.583 [2024-12-13 04:23:46.395766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:46.583 [2024-12-13 04:23:46.395824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:46.583 [2024-12-13 04:23:46.395942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:46.583 [2024-12-13 04:23:46.395995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:46.583 [2024-12-13 04:23:46.396132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:07:46.583 [2024-12-13 04:23:46.396170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:46.583 [2024-12-13 04:23:46.396485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:07:46.583 [2024-12-13 04:23:46.396662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:07:46.583 [2024-12-13 04:23:46.396710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:07:46.583 [2024-12-13 04:23:46.396863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:46.583 pt2
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.583 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:46.583 "name": "raid_bdev1",
00:07:46.583 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:46.583 "strip_size_kb": 64,
00:07:46.583 "state": "online",
00:07:46.583 "raid_level": "concat",
00:07:46.583 "superblock": true,
00:07:46.583 "num_base_bdevs": 2,
00:07:46.583 "num_base_bdevs_discovered": 2,
00:07:46.583 "num_base_bdevs_operational": 2,
00:07:46.583 "base_bdevs_list": [
00:07:46.583 {
00:07:46.583 "name": "pt1",
00:07:46.583 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:46.583 "is_configured": true,
00:07:46.583 "data_offset": 2048,
00:07:46.583 "data_size": 63488
00:07:46.583 },
00:07:46.583 {
00:07:46.583 "name": "pt2",
00:07:46.584 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:46.584 "is_configured": true,
00:07:46.584 "data_offset": 2048,
00:07:46.584 "data_size": 63488
00:07:46.584 }
00:07:46.584 ]
00:07:46.584 }'
00:07:46.584 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.584 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.843 [2024-12-13 04:23:46.822628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:46.843 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.112 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:47.112 "name": "raid_bdev1",
00:07:47.112 "aliases": [
00:07:47.112 "e6b4e1b9-557f-4416-90ed-8379dac61392"
00:07:47.112 ],
00:07:47.112 "product_name": "Raid Volume",
00:07:47.112 "block_size": 512,
00:07:47.112 "num_blocks": 126976,
00:07:47.112 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:47.113 "assigned_rate_limits": {
00:07:47.113 "rw_ios_per_sec": 0,
00:07:47.113 "rw_mbytes_per_sec": 0,
00:07:47.113 "r_mbytes_per_sec": 0,
00:07:47.113 "w_mbytes_per_sec": 0
00:07:47.113 },
00:07:47.113 "claimed": false,
00:07:47.113 "zoned": false,
00:07:47.113 "supported_io_types": {
00:07:47.113 "read": true,
00:07:47.113 "write": true,
00:07:47.113 "unmap": true,
00:07:47.113 "flush": true,
00:07:47.113 "reset": true,
00:07:47.113 "nvme_admin": false,
00:07:47.113 "nvme_io": false,
00:07:47.113 "nvme_io_md": false,
00:07:47.113 "write_zeroes": true,
00:07:47.113 "zcopy": false,
00:07:47.113 "get_zone_info": false,
00:07:47.113 "zone_management": false,
00:07:47.113 "zone_append": false,
00:07:47.113 "compare": false,
00:07:47.113 "compare_and_write": false,
00:07:47.113 "abort": false,
00:07:47.113 "seek_hole": false,
00:07:47.113 "seek_data": false,
00:07:47.113 "copy": false,
00:07:47.113 "nvme_iov_md": false
00:07:47.113 },
00:07:47.113 "memory_domains": [
00:07:47.113 {
00:07:47.113 "dma_device_id": "system",
00:07:47.113 "dma_device_type": 1
00:07:47.113 },
00:07:47.113 {
00:07:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:47.113 "dma_device_type": 2
00:07:47.113 },
00:07:47.113 {
00:07:47.113 "dma_device_id": "system",
00:07:47.113 "dma_device_type": 1
00:07:47.113 },
00:07:47.113 {
00:07:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:47.113 "dma_device_type": 2
00:07:47.113 }
00:07:47.113 ],
00:07:47.113 "driver_specific": {
00:07:47.113 "raid": {
00:07:47.113 "uuid": "e6b4e1b9-557f-4416-90ed-8379dac61392",
00:07:47.113 "strip_size_kb": 64,
00:07:47.113 "state": "online",
00:07:47.113 "raid_level": "concat",
00:07:47.113 "superblock": true,
00:07:47.113 "num_base_bdevs": 2,
00:07:47.113 "num_base_bdevs_discovered": 2,
00:07:47.113 "num_base_bdevs_operational": 2,
00:07:47.113 "base_bdevs_list": [
00:07:47.113 {
00:07:47.113 "name": "pt1",
00:07:47.113 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:47.113 "is_configured": true,
00:07:47.113 "data_offset": 2048,
00:07:47.113 "data_size": 63488
00:07:47.113 },
00:07:47.113 {
00:07:47.113 "name": "pt2",
00:07:47.113 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:47.113 "is_configured": true,
00:07:47.113 "data_offset": 2048,
00:07:47.113 "data_size": 63488
00:07:47.113 }
00:07:47.113 ]
00:07:47.113 }
00:07:47.113 }
00:07:47.113 }'
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:47.113 pt2'
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.113 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:47.114 04:23:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.114 [2024-12-13 04:23:47.022216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e6b4e1b9-557f-4416-90ed-8379dac61392 '!=' e6b4e1b9-557f-4416-90ed-8379dac61392 ']'
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75183
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75183 ']'
00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75183 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75183 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75183' 00:07:47.114 killing process with pid 75183 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75183 00:07:47.114 [2024-12-13 04:23:47.095809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.114 [2024-12-13 04:23:47.095941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.114 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75183 00:07:47.114 [2024-12-13 04:23:47.096031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.114 [2024-12-13 04:23:47.096041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:47.377 [2024-12-13 04:23:47.138206] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.637 04:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:47.637 00:07:47.637 real 0m3.440s 00:07:47.637 user 0m5.153s 00:07:47.637 sys 0m0.787s 00:07:47.637 04:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.637 04:23:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.637 ************************************ 00:07:47.637 END TEST raid_superblock_test 00:07:47.637 ************************************ 00:07:47.637 04:23:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:47.637 04:23:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.637 04:23:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.637 04:23:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.637 ************************************ 00:07:47.637 START TEST raid_read_error_test 00:07:47.637 ************************************ 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.637 
04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V14DOrYtS8 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75378 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75378 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75378 ']' 00:07:47.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.637 04:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.637 [2024-12-13 04:23:47.621397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:47.637 [2024-12-13 04:23:47.621617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75378 ] 00:07:47.898 [2024-12-13 04:23:47.778623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.898 [2024-12-13 04:23:47.819458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.898 [2024-12-13 04:23:47.896773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.898 [2024-12-13 04:23:47.896913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.468 04:23:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.468 BaseBdev1_malloc 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.468 true 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.468 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.728 [2024-12-13 04:23:48.485996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.728 [2024-12-13 04:23:48.486143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.728 [2024-12-13 04:23:48.486172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:48.728 [2024-12-13 04:23:48.486181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.728 [2024-12-13 04:23:48.488595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.728 [2024-12-13 04:23:48.488686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.728 BaseBdev1 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.728 
04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.728 BaseBdev2_malloc 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.728 true 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.728 [2024-12-13 04:23:48.532341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.728 [2024-12-13 04:23:48.532456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.728 [2024-12-13 04:23:48.532484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:48.728 [2024-12-13 04:23:48.532502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.728 [2024-12-13 04:23:48.534807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:48.728 [2024-12-13 04:23:48.534842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.728 BaseBdev2 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.728 [2024-12-13 04:23:48.544374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.728 [2024-12-13 04:23:48.546522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.728 [2024-12-13 04:23:48.546766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:48.728 [2024-12-13 04:23:48.546819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.728 [2024-12-13 04:23:48.547113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:48.728 [2024-12-13 04:23:48.547314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:48.728 [2024-12-13 04:23:48.547362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:48.728 [2024-12-13 04:23:48.547549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.728 "name": "raid_bdev1", 00:07:48.728 "uuid": "89926d75-21de-47f6-8b0e-55711adbeb7d", 00:07:48.728 "strip_size_kb": 64, 00:07:48.728 "state": "online", 00:07:48.728 "raid_level": "concat", 00:07:48.728 "superblock": true, 00:07:48.728 "num_base_bdevs": 2, 00:07:48.728 "num_base_bdevs_discovered": 2, 00:07:48.728 "num_base_bdevs_operational": 2, 00:07:48.728 "base_bdevs_list": [ 00:07:48.728 { 00:07:48.728 "name": "BaseBdev1", 00:07:48.728 "uuid": 
"c9a5969c-ff39-52cd-bded-3527fa329818", 00:07:48.728 "is_configured": true, 00:07:48.728 "data_offset": 2048, 00:07:48.728 "data_size": 63488 00:07:48.728 }, 00:07:48.728 { 00:07:48.728 "name": "BaseBdev2", 00:07:48.728 "uuid": "af0321ce-5d42-579b-b27b-88d0f0b71f77", 00:07:48.728 "is_configured": true, 00:07:48.728 "data_offset": 2048, 00:07:48.728 "data_size": 63488 00:07:48.728 } 00:07:48.728 ] 00:07:48.728 }' 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.728 04:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.298 04:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:49.298 04:23:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:49.298 [2024-12-13 04:23:49.095929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.238 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.239 "name": "raid_bdev1", 00:07:50.239 "uuid": "89926d75-21de-47f6-8b0e-55711adbeb7d", 00:07:50.239 "strip_size_kb": 64, 00:07:50.239 "state": "online", 00:07:50.239 "raid_level": "concat", 00:07:50.239 "superblock": true, 00:07:50.239 "num_base_bdevs": 2, 00:07:50.239 "num_base_bdevs_discovered": 2, 00:07:50.239 "num_base_bdevs_operational": 2, 00:07:50.239 "base_bdevs_list": [ 00:07:50.239 { 00:07:50.239 "name": "BaseBdev1", 00:07:50.239 "uuid": 
"c9a5969c-ff39-52cd-bded-3527fa329818", 00:07:50.239 "is_configured": true, 00:07:50.239 "data_offset": 2048, 00:07:50.239 "data_size": 63488 00:07:50.239 }, 00:07:50.239 { 00:07:50.239 "name": "BaseBdev2", 00:07:50.239 "uuid": "af0321ce-5d42-579b-b27b-88d0f0b71f77", 00:07:50.239 "is_configured": true, 00:07:50.239 "data_offset": 2048, 00:07:50.239 "data_size": 63488 00:07:50.239 } 00:07:50.239 ] 00:07:50.239 }' 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.239 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.499 [2024-12-13 04:23:50.492224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.499 [2024-12-13 04:23:50.492347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.499 [2024-12-13 04:23:50.494902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.499 [2024-12-13 04:23:50.494999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.499 [2024-12-13 04:23:50.495062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.499 [2024-12-13 04:23:50.495116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:50.499 { 00:07:50.499 "results": [ 00:07:50.499 { 00:07:50.499 "job": "raid_bdev1", 00:07:50.499 "core_mask": "0x1", 00:07:50.499 "workload": "randrw", 00:07:50.499 "percentage": 50, 00:07:50.499 "status": "finished", 00:07:50.499 "queue_depth": 1, 00:07:50.499 "io_size": 
131072, 00:07:50.499 "runtime": 1.397275, 00:07:50.499 "iops": 15546.68909126693, 00:07:50.499 "mibps": 1943.3361364083662, 00:07:50.499 "io_failed": 1, 00:07:50.499 "io_timeout": 0, 00:07:50.499 "avg_latency_us": 89.69319039413877, 00:07:50.499 "min_latency_us": 25.2646288209607, 00:07:50.499 "max_latency_us": 1287.825327510917 00:07:50.499 } 00:07:50.499 ], 00:07:50.499 "core_count": 1 00:07:50.499 } 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75378 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75378 ']' 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75378 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.499 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75378 00:07:50.760 killing process with pid 75378 00:07:50.760 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.760 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.760 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75378' 00:07:50.760 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75378 00:07:50.760 [2024-12-13 04:23:50.534270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.760 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75378 00:07:50.760 [2024-12-13 04:23:50.562835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.020 04:23:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V14DOrYtS8 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:51.020 ************************************ 00:07:51.020 END TEST raid_read_error_test 00:07:51.020 ************************************ 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:51.020 00:07:51.020 real 0m3.368s 00:07:51.020 user 0m4.211s 00:07:51.020 sys 0m0.573s 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.020 04:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.020 04:23:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:51.020 04:23:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:51.020 04:23:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.020 04:23:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.020 ************************************ 00:07:51.020 START TEST raid_write_error_test 00:07:51.020 ************************************ 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:51.020 
04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JSj8b8gfFF 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75512 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75512 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75512 ']' 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.020 04:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.281 [2024-12-13 04:23:51.070142] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:51.281 [2024-12-13 04:23:51.070327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75512 ] 00:07:51.281 [2024-12-13 04:23:51.226395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.281 [2024-12-13 04:23:51.267555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.541 [2024-12-13 04:23:51.347030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.541 [2024-12-13 04:23:51.347075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 BaseBdev1_malloc 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 true 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 [2024-12-13 04:23:51.947921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.111 [2024-12-13 04:23:51.947987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.111 [2024-12-13 04:23:51.948011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:52.111 [2024-12-13 04:23:51.948029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.111 [2024-12-13 04:23:51.950589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.111 [2024-12-13 04:23:51.950622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.111 BaseBdev1 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 BaseBdev2_malloc 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.111 04:23:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 true 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 [2024-12-13 04:23:51.994464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.111 [2024-12-13 04:23:51.994523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.111 [2024-12-13 04:23:51.994545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:52.111 [2024-12-13 04:23:51.994562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.111 [2024-12-13 04:23:51.996893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.111 [2024-12-13 04:23:51.996929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:52.111 BaseBdev2 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 [2024-12-13 04:23:52.006498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:52.111 [2024-12-13 04:23:52.008645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.111 [2024-12-13 04:23:52.008819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:52.111 [2024-12-13 04:23:52.008832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.111 [2024-12-13 04:23:52.009074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:52.111 [2024-12-13 04:23:52.009234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:52.111 [2024-12-13 04:23:52.009251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:52.111 [2024-12-13 04:23:52.009372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.111 04:23:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.111 "name": "raid_bdev1", 00:07:52.111 "uuid": "8409bddd-f030-4a59-97d8-b00fc32e551e", 00:07:52.111 "strip_size_kb": 64, 00:07:52.111 "state": "online", 00:07:52.111 "raid_level": "concat", 00:07:52.111 "superblock": true, 00:07:52.111 "num_base_bdevs": 2, 00:07:52.111 "num_base_bdevs_discovered": 2, 00:07:52.111 "num_base_bdevs_operational": 2, 00:07:52.111 "base_bdevs_list": [ 00:07:52.111 { 00:07:52.111 "name": "BaseBdev1", 00:07:52.111 "uuid": "1d045243-e6e1-5dbe-94d3-eb567648739d", 00:07:52.111 "is_configured": true, 00:07:52.111 "data_offset": 2048, 00:07:52.111 "data_size": 63488 00:07:52.111 }, 00:07:52.111 { 00:07:52.111 "name": "BaseBdev2", 00:07:52.111 "uuid": "7855fbdc-9fd0-5947-a9f0-ba74976408d0", 00:07:52.111 "is_configured": true, 00:07:52.111 "data_offset": 2048, 00:07:52.111 "data_size": 63488 00:07:52.111 } 00:07:52.111 ] 00:07:52.111 }' 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.111 04:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.371 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:52.371 04:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:52.631 [2024-12-13 04:23:52.478064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:53.569 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.570 "name": "raid_bdev1", 00:07:53.570 "uuid": "8409bddd-f030-4a59-97d8-b00fc32e551e", 00:07:53.570 "strip_size_kb": 64, 00:07:53.570 "state": "online", 00:07:53.570 "raid_level": "concat", 00:07:53.570 "superblock": true, 00:07:53.570 "num_base_bdevs": 2, 00:07:53.570 "num_base_bdevs_discovered": 2, 00:07:53.570 "num_base_bdevs_operational": 2, 00:07:53.570 "base_bdevs_list": [ 00:07:53.570 { 00:07:53.570 "name": "BaseBdev1", 00:07:53.570 "uuid": "1d045243-e6e1-5dbe-94d3-eb567648739d", 00:07:53.570 "is_configured": true, 00:07:53.570 "data_offset": 2048, 00:07:53.570 "data_size": 63488 00:07:53.570 }, 00:07:53.570 { 00:07:53.570 "name": "BaseBdev2", 00:07:53.570 "uuid": "7855fbdc-9fd0-5947-a9f0-ba74976408d0", 00:07:53.570 "is_configured": true, 00:07:53.570 "data_offset": 2048, 00:07:53.570 "data_size": 63488 00:07:53.570 } 00:07:53.570 ] 00:07:53.570 }' 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.570 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.829 04:23:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.829 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.829 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.829 [2024-12-13 04:23:53.838812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.829 [2024-12-13 04:23:53.838862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.829 [2024-12-13 04:23:53.841451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.829 [2024-12-13 04:23:53.841503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.829 [2024-12-13 04:23:53.841555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.829 [2024-12-13 04:23:53.841566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:54.089 { 00:07:54.089 "results": [ 00:07:54.089 { 00:07:54.089 "job": "raid_bdev1", 00:07:54.089 "core_mask": "0x1", 00:07:54.089 "workload": "randrw", 00:07:54.089 "percentage": 50, 00:07:54.089 "status": "finished", 00:07:54.089 "queue_depth": 1, 00:07:54.089 "io_size": 131072, 00:07:54.089 "runtime": 1.361409, 00:07:54.089 "iops": 15154.887326292099, 00:07:54.089 "mibps": 1894.3609157865124, 00:07:54.089 "io_failed": 1, 00:07:54.089 "io_timeout": 0, 00:07:54.089 "avg_latency_us": 92.16273231692902, 00:07:54.089 "min_latency_us": 25.6, 00:07:54.089 "max_latency_us": 1366.5257641921398 00:07:54.089 } 00:07:54.089 ], 00:07:54.089 "core_count": 1 00:07:54.089 } 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75512 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 75512 ']' 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75512 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75512 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75512' 00:07:54.089 killing process with pid 75512 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75512 00:07:54.089 [2024-12-13 04:23:53.888229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.089 04:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75512 00:07:54.089 [2024-12-13 04:23:53.917398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JSj8b8gfFF 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:54.349 00:07:54.349 real 0m3.281s 00:07:54.349 user 0m4.034s 00:07:54.349 sys 0m0.589s 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.349 ************************************ 00:07:54.349 END TEST raid_write_error_test 00:07:54.349 ************************************ 00:07:54.349 04:23:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.349 04:23:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:54.349 04:23:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:54.349 04:23:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.349 04:23:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.349 04:23:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.349 ************************************ 00:07:54.349 START TEST raid_state_function_test 00:07:54.349 ************************************ 00:07:54.349 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75645 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75645' 00:07:54.350 Process raid pid: 75645 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75645 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75645 ']' 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.350 04:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.610 [2024-12-13 04:23:54.415519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:54.610 [2024-12-13 04:23:54.415713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.610 [2024-12-13 04:23:54.572558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.610 [2024-12-13 04:23:54.611812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.870 [2024-12-13 04:23:54.687681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.870 [2024-12-13 04:23:54.687730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 [2024-12-13 04:23:55.257818] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.440 [2024-12-13 04:23:55.257951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.440 [2024-12-13 04:23:55.257966] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.440 [2024-12-13 04:23:55.257979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.440 04:23:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.440 "name": "Existed_Raid", 00:07:55.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.440 "strip_size_kb": 0, 00:07:55.440 "state": "configuring", 00:07:55.440 
"raid_level": "raid1", 00:07:55.440 "superblock": false, 00:07:55.440 "num_base_bdevs": 2, 00:07:55.440 "num_base_bdevs_discovered": 0, 00:07:55.440 "num_base_bdevs_operational": 2, 00:07:55.440 "base_bdevs_list": [ 00:07:55.440 { 00:07:55.440 "name": "BaseBdev1", 00:07:55.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.440 "is_configured": false, 00:07:55.440 "data_offset": 0, 00:07:55.440 "data_size": 0 00:07:55.440 }, 00:07:55.440 { 00:07:55.440 "name": "BaseBdev2", 00:07:55.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.440 "is_configured": false, 00:07:55.440 "data_offset": 0, 00:07:55.440 "data_size": 0 00:07:55.440 } 00:07:55.440 ] 00:07:55.440 }' 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.440 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 [2024-12-13 04:23:55.701169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.700 [2024-12-13 04:23:55.701288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:55.700 [2024-12-13 04:23:55.712982] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.700 [2024-12-13 04:23:55.713091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.700 [2024-12-13 04:23:55.713119] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.700 [2024-12-13 04:23:55.713156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.960 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.960 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.960 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.960 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.960 [2024-12-13 04:23:55.740433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.960 BaseBdev1 00:07:55.960 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.961 [ 00:07:55.961 { 00:07:55.961 "name": "BaseBdev1", 00:07:55.961 "aliases": [ 00:07:55.961 "669fafdf-1b8e-4915-b7db-f3371b0fb44a" 00:07:55.961 ], 00:07:55.961 "product_name": "Malloc disk", 00:07:55.961 "block_size": 512, 00:07:55.961 "num_blocks": 65536, 00:07:55.961 "uuid": "669fafdf-1b8e-4915-b7db-f3371b0fb44a", 00:07:55.961 "assigned_rate_limits": { 00:07:55.961 "rw_ios_per_sec": 0, 00:07:55.961 "rw_mbytes_per_sec": 0, 00:07:55.961 "r_mbytes_per_sec": 0, 00:07:55.961 "w_mbytes_per_sec": 0 00:07:55.961 }, 00:07:55.961 "claimed": true, 00:07:55.961 "claim_type": "exclusive_write", 00:07:55.961 "zoned": false, 00:07:55.961 "supported_io_types": { 00:07:55.961 "read": true, 00:07:55.961 "write": true, 00:07:55.961 "unmap": true, 00:07:55.961 "flush": true, 00:07:55.961 "reset": true, 00:07:55.961 "nvme_admin": false, 00:07:55.961 "nvme_io": false, 00:07:55.961 "nvme_io_md": false, 00:07:55.961 "write_zeroes": true, 00:07:55.961 "zcopy": true, 00:07:55.961 "get_zone_info": false, 00:07:55.961 "zone_management": false, 00:07:55.961 "zone_append": false, 00:07:55.961 "compare": false, 00:07:55.961 "compare_and_write": false, 00:07:55.961 "abort": true, 00:07:55.961 "seek_hole": false, 00:07:55.961 "seek_data": false, 00:07:55.961 "copy": true, 00:07:55.961 "nvme_iov_md": 
false 00:07:55.961 }, 00:07:55.961 "memory_domains": [ 00:07:55.961 { 00:07:55.961 "dma_device_id": "system", 00:07:55.961 "dma_device_type": 1 00:07:55.961 }, 00:07:55.961 { 00:07:55.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.961 "dma_device_type": 2 00:07:55.961 } 00:07:55.961 ], 00:07:55.961 "driver_specific": {} 00:07:55.961 } 00:07:55.961 ] 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.961 
04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.961 "name": "Existed_Raid", 00:07:55.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.961 "strip_size_kb": 0, 00:07:55.961 "state": "configuring", 00:07:55.961 "raid_level": "raid1", 00:07:55.961 "superblock": false, 00:07:55.961 "num_base_bdevs": 2, 00:07:55.961 "num_base_bdevs_discovered": 1, 00:07:55.961 "num_base_bdevs_operational": 2, 00:07:55.961 "base_bdevs_list": [ 00:07:55.961 { 00:07:55.961 "name": "BaseBdev1", 00:07:55.961 "uuid": "669fafdf-1b8e-4915-b7db-f3371b0fb44a", 00:07:55.961 "is_configured": true, 00:07:55.961 "data_offset": 0, 00:07:55.961 "data_size": 65536 00:07:55.961 }, 00:07:55.961 { 00:07:55.961 "name": "BaseBdev2", 00:07:55.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.961 "is_configured": false, 00:07:55.961 "data_offset": 0, 00:07:55.961 "data_size": 0 00:07:55.961 } 00:07:55.961 ] 00:07:55.961 }' 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.961 04:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.224 [2024-12-13 04:23:56.215650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.224 [2024-12-13 04:23:56.215719] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.224 [2024-12-13 04:23:56.223632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.224 [2024-12-13 04:23:56.225782] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.224 [2024-12-13 04:23:56.225820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.224 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.488 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.488 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.488 "name": "Existed_Raid", 00:07:56.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.488 "strip_size_kb": 0, 00:07:56.488 "state": "configuring", 00:07:56.488 "raid_level": "raid1", 00:07:56.488 "superblock": false, 00:07:56.488 "num_base_bdevs": 2, 00:07:56.488 "num_base_bdevs_discovered": 1, 00:07:56.488 "num_base_bdevs_operational": 2, 00:07:56.488 "base_bdevs_list": [ 00:07:56.488 { 00:07:56.488 "name": "BaseBdev1", 00:07:56.488 "uuid": "669fafdf-1b8e-4915-b7db-f3371b0fb44a", 00:07:56.488 "is_configured": true, 00:07:56.488 "data_offset": 0, 00:07:56.488 "data_size": 65536 00:07:56.488 }, 00:07:56.488 { 00:07:56.488 "name": "BaseBdev2", 00:07:56.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.488 "is_configured": false, 00:07:56.488 "data_offset": 0, 00:07:56.488 "data_size": 0 00:07:56.488 } 00:07:56.488 ] 
00:07:56.488 }' 00:07:56.488 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.488 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.748 [2024-12-13 04:23:56.691540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.748 [2024-12-13 04:23:56.691597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:56.748 [2024-12-13 04:23:56.691605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:56.748 [2024-12-13 04:23:56.691919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:56.748 [2024-12-13 04:23:56.692077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:56.748 [2024-12-13 04:23:56.692098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:56.748 [2024-12-13 04:23:56.692314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.748 BaseBdev2 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.748 [ 00:07:56.748 { 00:07:56.748 "name": "BaseBdev2", 00:07:56.748 "aliases": [ 00:07:56.748 "1d7fd1aa-a2d6-4c22-8614-003638e76e9f" 00:07:56.748 ], 00:07:56.748 "product_name": "Malloc disk", 00:07:56.748 "block_size": 512, 00:07:56.748 "num_blocks": 65536, 00:07:56.748 "uuid": "1d7fd1aa-a2d6-4c22-8614-003638e76e9f", 00:07:56.748 "assigned_rate_limits": { 00:07:56.748 "rw_ios_per_sec": 0, 00:07:56.748 "rw_mbytes_per_sec": 0, 00:07:56.748 "r_mbytes_per_sec": 0, 00:07:56.748 "w_mbytes_per_sec": 0 00:07:56.748 }, 00:07:56.748 "claimed": true, 00:07:56.748 "claim_type": "exclusive_write", 00:07:56.748 "zoned": false, 00:07:56.748 "supported_io_types": { 00:07:56.748 "read": true, 00:07:56.748 "write": true, 00:07:56.748 "unmap": true, 00:07:56.748 "flush": true, 00:07:56.748 "reset": true, 00:07:56.748 "nvme_admin": false, 00:07:56.748 "nvme_io": false, 00:07:56.748 "nvme_io_md": false, 00:07:56.748 "write_zeroes": 
true, 00:07:56.748 "zcopy": true, 00:07:56.748 "get_zone_info": false, 00:07:56.748 "zone_management": false, 00:07:56.748 "zone_append": false, 00:07:56.748 "compare": false, 00:07:56.748 "compare_and_write": false, 00:07:56.748 "abort": true, 00:07:56.748 "seek_hole": false, 00:07:56.748 "seek_data": false, 00:07:56.748 "copy": true, 00:07:56.748 "nvme_iov_md": false 00:07:56.748 }, 00:07:56.748 "memory_domains": [ 00:07:56.748 { 00:07:56.748 "dma_device_id": "system", 00:07:56.748 "dma_device_type": 1 00:07:56.748 }, 00:07:56.748 { 00:07:56.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.748 "dma_device_type": 2 00:07:56.748 } 00:07:56.748 ], 00:07:56.748 "driver_specific": {} 00:07:56.748 } 00:07:56.748 ] 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.748 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.749 04:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.749 04:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.009 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.009 "name": "Existed_Raid", 00:07:57.009 "uuid": "2e270457-8174-4855-b9b1-a8e2fb7b17de", 00:07:57.009 "strip_size_kb": 0, 00:07:57.009 "state": "online", 00:07:57.009 "raid_level": "raid1", 00:07:57.009 "superblock": false, 00:07:57.009 "num_base_bdevs": 2, 00:07:57.009 "num_base_bdevs_discovered": 2, 00:07:57.009 "num_base_bdevs_operational": 2, 00:07:57.009 "base_bdevs_list": [ 00:07:57.009 { 00:07:57.009 "name": "BaseBdev1", 00:07:57.009 "uuid": "669fafdf-1b8e-4915-b7db-f3371b0fb44a", 00:07:57.009 "is_configured": true, 00:07:57.009 "data_offset": 0, 00:07:57.009 "data_size": 65536 00:07:57.009 }, 00:07:57.009 { 00:07:57.009 "name": "BaseBdev2", 00:07:57.009 "uuid": "1d7fd1aa-a2d6-4c22-8614-003638e76e9f", 00:07:57.009 "is_configured": true, 00:07:57.009 "data_offset": 0, 00:07:57.009 "data_size": 65536 00:07:57.009 } 00:07:57.009 ] 00:07:57.009 }' 00:07:57.009 04:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.009 04:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.269 [2024-12-13 04:23:57.178952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.269 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.269 "name": "Existed_Raid", 00:07:57.269 "aliases": [ 00:07:57.269 "2e270457-8174-4855-b9b1-a8e2fb7b17de" 00:07:57.269 ], 00:07:57.269 "product_name": "Raid Volume", 00:07:57.269 "block_size": 512, 00:07:57.269 "num_blocks": 65536, 00:07:57.269 "uuid": "2e270457-8174-4855-b9b1-a8e2fb7b17de", 00:07:57.269 "assigned_rate_limits": { 00:07:57.269 "rw_ios_per_sec": 0, 00:07:57.269 "rw_mbytes_per_sec": 0, 00:07:57.269 "r_mbytes_per_sec": 0, 00:07:57.269 
"w_mbytes_per_sec": 0 00:07:57.269 }, 00:07:57.269 "claimed": false, 00:07:57.269 "zoned": false, 00:07:57.269 "supported_io_types": { 00:07:57.269 "read": true, 00:07:57.269 "write": true, 00:07:57.269 "unmap": false, 00:07:57.269 "flush": false, 00:07:57.269 "reset": true, 00:07:57.269 "nvme_admin": false, 00:07:57.269 "nvme_io": false, 00:07:57.269 "nvme_io_md": false, 00:07:57.269 "write_zeroes": true, 00:07:57.269 "zcopy": false, 00:07:57.269 "get_zone_info": false, 00:07:57.269 "zone_management": false, 00:07:57.269 "zone_append": false, 00:07:57.269 "compare": false, 00:07:57.269 "compare_and_write": false, 00:07:57.269 "abort": false, 00:07:57.269 "seek_hole": false, 00:07:57.269 "seek_data": false, 00:07:57.269 "copy": false, 00:07:57.269 "nvme_iov_md": false 00:07:57.269 }, 00:07:57.269 "memory_domains": [ 00:07:57.269 { 00:07:57.269 "dma_device_id": "system", 00:07:57.269 "dma_device_type": 1 00:07:57.269 }, 00:07:57.269 { 00:07:57.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.269 "dma_device_type": 2 00:07:57.269 }, 00:07:57.269 { 00:07:57.269 "dma_device_id": "system", 00:07:57.269 "dma_device_type": 1 00:07:57.269 }, 00:07:57.269 { 00:07:57.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.269 "dma_device_type": 2 00:07:57.269 } 00:07:57.269 ], 00:07:57.269 "driver_specific": { 00:07:57.269 "raid": { 00:07:57.269 "uuid": "2e270457-8174-4855-b9b1-a8e2fb7b17de", 00:07:57.269 "strip_size_kb": 0, 00:07:57.269 "state": "online", 00:07:57.269 "raid_level": "raid1", 00:07:57.269 "superblock": false, 00:07:57.269 "num_base_bdevs": 2, 00:07:57.269 "num_base_bdevs_discovered": 2, 00:07:57.269 "num_base_bdevs_operational": 2, 00:07:57.269 "base_bdevs_list": [ 00:07:57.269 { 00:07:57.269 "name": "BaseBdev1", 00:07:57.269 "uuid": "669fafdf-1b8e-4915-b7db-f3371b0fb44a", 00:07:57.269 "is_configured": true, 00:07:57.269 "data_offset": 0, 00:07:57.269 "data_size": 65536 00:07:57.269 }, 00:07:57.269 { 00:07:57.269 "name": "BaseBdev2", 00:07:57.269 "uuid": 
"1d7fd1aa-a2d6-4c22-8614-003638e76e9f", 00:07:57.269 "is_configured": true, 00:07:57.270 "data_offset": 0, 00:07:57.270 "data_size": 65536 00:07:57.270 } 00:07:57.270 ] 00:07:57.270 } 00:07:57.270 } 00:07:57.270 }' 00:07:57.270 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.270 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:57.270 BaseBdev2' 00:07:57.270 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.529 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.529 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.529 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.529 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.529 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.530 04:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.530 [2024-12-13 04:23:57.390476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.530 "name": "Existed_Raid", 00:07:57.530 "uuid": "2e270457-8174-4855-b9b1-a8e2fb7b17de", 00:07:57.530 "strip_size_kb": 0, 00:07:57.530 "state": "online", 00:07:57.530 "raid_level": "raid1", 00:07:57.530 "superblock": false, 00:07:57.530 "num_base_bdevs": 2, 00:07:57.530 "num_base_bdevs_discovered": 1, 00:07:57.530 "num_base_bdevs_operational": 1, 00:07:57.530 "base_bdevs_list": [ 00:07:57.530 { 
00:07:57.530 "name": null, 00:07:57.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.530 "is_configured": false, 00:07:57.530 "data_offset": 0, 00:07:57.530 "data_size": 65536 00:07:57.530 }, 00:07:57.530 { 00:07:57.530 "name": "BaseBdev2", 00:07:57.530 "uuid": "1d7fd1aa-a2d6-4c22-8614-003638e76e9f", 00:07:57.530 "is_configured": true, 00:07:57.530 "data_offset": 0, 00:07:57.530 "data_size": 65536 00:07:57.530 } 00:07:57.530 ] 00:07:57.530 }' 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.530 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.790 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:57.790 [2024-12-13 04:23:57.786466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.790 [2024-12-13 04:23:57.786638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.049 [2024-12-13 04:23:57.807555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.049 [2024-12-13 04:23:57.807692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.049 [2024-12-13 04:23:57.807741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75645 00:07:58.049 04:23:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75645 ']' 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75645 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75645 00:07:58.049 killing process with pid 75645 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.049 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.050 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75645' 00:07:58.050 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75645 00:07:58.050 [2024-12-13 04:23:57.896876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.050 04:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75645 00:07:58.050 [2024-12-13 04:23:57.898449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:58.310 00:07:58.310 real 0m3.904s 00:07:58.310 user 0m5.959s 00:07:58.310 sys 0m0.881s 00:07:58.310 ************************************ 00:07:58.310 END TEST raid_state_function_test 00:07:58.310 ************************************ 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.310 04:23:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:58.310 04:23:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.310 04:23:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.310 04:23:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.310 ************************************ 00:07:58.310 START TEST raid_state_function_test_sb 00:07:58.310 ************************************ 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:58.310 Process raid pid: 75876 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75876 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75876' 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75876 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75876 ']' 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.310 04:23:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.310 04:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.570 [2024-12-13 04:23:58.393978] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:58.570 [2024-12-13 04:23:58.394139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.570 [2024-12-13 04:23:58.529739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.570 [2024-12-13 04:23:58.569621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.830 [2024-12-13 04:23:58.646224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.830 [2024-12-13 04:23:58.646374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.400 [2024-12-13 04:23:59.232833] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.400 [2024-12-13 04:23:59.232966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.400 [2024-12-13 04:23:59.233000] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.400 [2024-12-13 04:23:59.233027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.400 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.400 "name": "Existed_Raid", 00:07:59.400 "uuid": "4614d3fb-6bae-4335-8172-b9090713141f", 00:07:59.400 "strip_size_kb": 0, 00:07:59.400 "state": "configuring", 00:07:59.400 "raid_level": "raid1", 00:07:59.400 "superblock": true, 00:07:59.401 "num_base_bdevs": 2, 00:07:59.401 "num_base_bdevs_discovered": 0, 00:07:59.401 "num_base_bdevs_operational": 2, 00:07:59.401 "base_bdevs_list": [ 00:07:59.401 { 00:07:59.401 "name": "BaseBdev1", 00:07:59.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.401 "is_configured": false, 00:07:59.401 "data_offset": 0, 00:07:59.401 "data_size": 0 00:07:59.401 }, 00:07:59.401 { 00:07:59.401 "name": "BaseBdev2", 00:07:59.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.401 "is_configured": false, 00:07:59.401 "data_offset": 0, 00:07:59.401 "data_size": 0 00:07:59.401 } 00:07:59.401 ] 00:07:59.401 }' 00:07:59.401 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.401 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 [2024-12-13 04:23:59.711920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:59.972 [2024-12-13 04:23:59.711958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 [2024-12-13 04:23:59.719911] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.972 [2024-12-13 04:23:59.719951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.972 [2024-12-13 04:23:59.719959] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.972 [2024-12-13 04:23:59.719979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 [2024-12-13 04:23:59.746886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.972 BaseBdev1 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 [ 00:07:59.972 { 00:07:59.972 "name": "BaseBdev1", 00:07:59.972 "aliases": [ 00:07:59.972 "26f3c557-04a2-48f2-8483-9c68042d4f44" 00:07:59.972 ], 00:07:59.972 "product_name": "Malloc disk", 00:07:59.972 "block_size": 512, 00:07:59.972 "num_blocks": 65536, 00:07:59.972 "uuid": "26f3c557-04a2-48f2-8483-9c68042d4f44", 00:07:59.972 "assigned_rate_limits": { 00:07:59.972 "rw_ios_per_sec": 0, 00:07:59.972 "rw_mbytes_per_sec": 0, 00:07:59.972 "r_mbytes_per_sec": 0, 00:07:59.972 "w_mbytes_per_sec": 0 00:07:59.972 }, 00:07:59.972 "claimed": true, 
00:07:59.972 "claim_type": "exclusive_write", 00:07:59.972 "zoned": false, 00:07:59.972 "supported_io_types": { 00:07:59.972 "read": true, 00:07:59.972 "write": true, 00:07:59.972 "unmap": true, 00:07:59.972 "flush": true, 00:07:59.972 "reset": true, 00:07:59.972 "nvme_admin": false, 00:07:59.972 "nvme_io": false, 00:07:59.972 "nvme_io_md": false, 00:07:59.972 "write_zeroes": true, 00:07:59.972 "zcopy": true, 00:07:59.972 "get_zone_info": false, 00:07:59.972 "zone_management": false, 00:07:59.972 "zone_append": false, 00:07:59.972 "compare": false, 00:07:59.972 "compare_and_write": false, 00:07:59.972 "abort": true, 00:07:59.972 "seek_hole": false, 00:07:59.972 "seek_data": false, 00:07:59.972 "copy": true, 00:07:59.972 "nvme_iov_md": false 00:07:59.972 }, 00:07:59.972 "memory_domains": [ 00:07:59.972 { 00:07:59.972 "dma_device_id": "system", 00:07:59.972 "dma_device_type": 1 00:07:59.972 }, 00:07:59.972 { 00:07:59.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.972 "dma_device_type": 2 00:07:59.972 } 00:07:59.972 ], 00:07:59.972 "driver_specific": {} 00:07:59.972 } 00:07:59.972 ] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.972 "name": "Existed_Raid", 00:07:59.972 "uuid": "9d4fe2db-4095-4af4-ae99-1d1d53fdcc6b", 00:07:59.972 "strip_size_kb": 0, 00:07:59.972 "state": "configuring", 00:07:59.972 "raid_level": "raid1", 00:07:59.972 "superblock": true, 00:07:59.972 "num_base_bdevs": 2, 00:07:59.972 "num_base_bdevs_discovered": 1, 00:07:59.972 "num_base_bdevs_operational": 2, 00:07:59.972 "base_bdevs_list": [ 00:07:59.972 { 00:07:59.972 "name": "BaseBdev1", 00:07:59.972 "uuid": "26f3c557-04a2-48f2-8483-9c68042d4f44", 00:07:59.972 "is_configured": true, 00:07:59.972 "data_offset": 2048, 00:07:59.972 "data_size": 63488 00:07:59.972 }, 00:07:59.972 { 00:07:59.972 "name": "BaseBdev2", 00:07:59.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.972 "is_configured": false, 00:07:59.972 
"data_offset": 0, 00:07:59.972 "data_size": 0 00:07:59.972 } 00:07:59.972 ] 00:07:59.972 }' 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.972 04:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.232 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.233 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.233 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.233 [2024-12-13 04:24:00.234075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.233 [2024-12-13 04:24:00.234186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:00.233 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.233 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.233 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.233 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.233 [2024-12-13 04:24:00.242086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.233 [2024-12-13 04:24:00.244177] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.233 [2024-12-13 04:24:00.244215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.493 "name": "Existed_Raid", 00:08:00.493 "uuid": "fb1033de-46fa-4d37-a3e3-384e06c0bde4", 00:08:00.493 "strip_size_kb": 0, 00:08:00.493 "state": "configuring", 00:08:00.493 "raid_level": "raid1", 00:08:00.493 "superblock": true, 00:08:00.493 "num_base_bdevs": 2, 00:08:00.493 "num_base_bdevs_discovered": 1, 00:08:00.493 "num_base_bdevs_operational": 2, 00:08:00.493 "base_bdevs_list": [ 00:08:00.493 { 00:08:00.493 "name": "BaseBdev1", 00:08:00.493 "uuid": "26f3c557-04a2-48f2-8483-9c68042d4f44", 00:08:00.493 "is_configured": true, 00:08:00.493 "data_offset": 2048, 00:08:00.493 "data_size": 63488 00:08:00.493 }, 00:08:00.493 { 00:08:00.493 "name": "BaseBdev2", 00:08:00.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.493 "is_configured": false, 00:08:00.493 "data_offset": 0, 00:08:00.493 "data_size": 0 00:08:00.493 } 00:08:00.493 ] 00:08:00.493 }' 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.493 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.753 [2024-12-13 04:24:00.646129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.753 [2024-12-13 04:24:00.646456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:00.753 [2024-12-13 04:24:00.646517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.753 [2024-12-13 04:24:00.646833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:00.753 
BaseBdev2 00:08:00.753 [2024-12-13 04:24:00.647041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:00.753 [2024-12-13 04:24:00.647064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:00.753 [2024-12-13 04:24:00.647196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.753 04:24:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.753 [ 00:08:00.753 { 00:08:00.753 "name": "BaseBdev2", 00:08:00.753 "aliases": [ 00:08:00.753 "5261d522-1a62-4659-b810-e1f782497876" 00:08:00.753 ], 00:08:00.753 "product_name": "Malloc disk", 00:08:00.753 "block_size": 512, 00:08:00.753 "num_blocks": 65536, 00:08:00.753 "uuid": "5261d522-1a62-4659-b810-e1f782497876", 00:08:00.753 "assigned_rate_limits": { 00:08:00.753 "rw_ios_per_sec": 0, 00:08:00.753 "rw_mbytes_per_sec": 0, 00:08:00.753 "r_mbytes_per_sec": 0, 00:08:00.753 "w_mbytes_per_sec": 0 00:08:00.753 }, 00:08:00.753 "claimed": true, 00:08:00.753 "claim_type": "exclusive_write", 00:08:00.753 "zoned": false, 00:08:00.753 "supported_io_types": { 00:08:00.753 "read": true, 00:08:00.753 "write": true, 00:08:00.753 "unmap": true, 00:08:00.753 "flush": true, 00:08:00.753 "reset": true, 00:08:00.753 "nvme_admin": false, 00:08:00.753 "nvme_io": false, 00:08:00.753 "nvme_io_md": false, 00:08:00.753 "write_zeroes": true, 00:08:00.753 "zcopy": true, 00:08:00.754 "get_zone_info": false, 00:08:00.754 "zone_management": false, 00:08:00.754 "zone_append": false, 00:08:00.754 "compare": false, 00:08:00.754 "compare_and_write": false, 00:08:00.754 "abort": true, 00:08:00.754 "seek_hole": false, 00:08:00.754 "seek_data": false, 00:08:00.754 "copy": true, 00:08:00.754 "nvme_iov_md": false 00:08:00.754 }, 00:08:00.754 "memory_domains": [ 00:08:00.754 { 00:08:00.754 "dma_device_id": "system", 00:08:00.754 "dma_device_type": 1 00:08:00.754 }, 00:08:00.754 { 00:08:00.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.754 "dma_device_type": 2 00:08:00.754 } 00:08:00.754 ], 00:08:00.754 "driver_specific": {} 00:08:00.754 } 00:08:00.754 ] 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:00.754 "name": "Existed_Raid", 00:08:00.754 "uuid": "fb1033de-46fa-4d37-a3e3-384e06c0bde4", 00:08:00.754 "strip_size_kb": 0, 00:08:00.754 "state": "online", 00:08:00.754 "raid_level": "raid1", 00:08:00.754 "superblock": true, 00:08:00.754 "num_base_bdevs": 2, 00:08:00.754 "num_base_bdevs_discovered": 2, 00:08:00.754 "num_base_bdevs_operational": 2, 00:08:00.754 "base_bdevs_list": [ 00:08:00.754 { 00:08:00.754 "name": "BaseBdev1", 00:08:00.754 "uuid": "26f3c557-04a2-48f2-8483-9c68042d4f44", 00:08:00.754 "is_configured": true, 00:08:00.754 "data_offset": 2048, 00:08:00.754 "data_size": 63488 00:08:00.754 }, 00:08:00.754 { 00:08:00.754 "name": "BaseBdev2", 00:08:00.754 "uuid": "5261d522-1a62-4659-b810-e1f782497876", 00:08:00.754 "is_configured": true, 00:08:00.754 "data_offset": 2048, 00:08:00.754 "data_size": 63488 00:08:00.754 } 00:08:00.754 ] 00:08:00.754 }' 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.754 04:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:01.324 04:24:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.324 [2024-12-13 04:24:01.097691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.324 "name": "Existed_Raid", 00:08:01.324 "aliases": [ 00:08:01.324 "fb1033de-46fa-4d37-a3e3-384e06c0bde4" 00:08:01.324 ], 00:08:01.324 "product_name": "Raid Volume", 00:08:01.324 "block_size": 512, 00:08:01.324 "num_blocks": 63488, 00:08:01.324 "uuid": "fb1033de-46fa-4d37-a3e3-384e06c0bde4", 00:08:01.324 "assigned_rate_limits": { 00:08:01.324 "rw_ios_per_sec": 0, 00:08:01.324 "rw_mbytes_per_sec": 0, 00:08:01.324 "r_mbytes_per_sec": 0, 00:08:01.324 "w_mbytes_per_sec": 0 00:08:01.324 }, 00:08:01.324 "claimed": false, 00:08:01.324 "zoned": false, 00:08:01.324 "supported_io_types": { 00:08:01.324 "read": true, 00:08:01.324 "write": true, 00:08:01.324 "unmap": false, 00:08:01.324 "flush": false, 00:08:01.324 "reset": true, 00:08:01.324 "nvme_admin": false, 00:08:01.324 "nvme_io": false, 00:08:01.324 "nvme_io_md": false, 00:08:01.324 "write_zeroes": true, 00:08:01.324 "zcopy": false, 00:08:01.324 "get_zone_info": false, 00:08:01.324 "zone_management": false, 00:08:01.324 "zone_append": false, 00:08:01.324 "compare": false, 00:08:01.324 "compare_and_write": false, 00:08:01.324 "abort": false, 00:08:01.324 "seek_hole": false, 00:08:01.324 "seek_data": false, 00:08:01.324 "copy": false, 00:08:01.324 "nvme_iov_md": false 00:08:01.324 }, 00:08:01.324 "memory_domains": [ 00:08:01.324 { 00:08:01.324 "dma_device_id": "system", 00:08:01.324 
"dma_device_type": 1 00:08:01.324 }, 00:08:01.324 { 00:08:01.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.324 "dma_device_type": 2 00:08:01.324 }, 00:08:01.324 { 00:08:01.324 "dma_device_id": "system", 00:08:01.324 "dma_device_type": 1 00:08:01.324 }, 00:08:01.324 { 00:08:01.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.324 "dma_device_type": 2 00:08:01.324 } 00:08:01.324 ], 00:08:01.324 "driver_specific": { 00:08:01.324 "raid": { 00:08:01.324 "uuid": "fb1033de-46fa-4d37-a3e3-384e06c0bde4", 00:08:01.324 "strip_size_kb": 0, 00:08:01.324 "state": "online", 00:08:01.324 "raid_level": "raid1", 00:08:01.324 "superblock": true, 00:08:01.324 "num_base_bdevs": 2, 00:08:01.324 "num_base_bdevs_discovered": 2, 00:08:01.324 "num_base_bdevs_operational": 2, 00:08:01.324 "base_bdevs_list": [ 00:08:01.324 { 00:08:01.324 "name": "BaseBdev1", 00:08:01.324 "uuid": "26f3c557-04a2-48f2-8483-9c68042d4f44", 00:08:01.324 "is_configured": true, 00:08:01.324 "data_offset": 2048, 00:08:01.324 "data_size": 63488 00:08:01.324 }, 00:08:01.324 { 00:08:01.324 "name": "BaseBdev2", 00:08:01.324 "uuid": "5261d522-1a62-4659-b810-e1f782497876", 00:08:01.324 "is_configured": true, 00:08:01.324 "data_offset": 2048, 00:08:01.324 "data_size": 63488 00:08:01.324 } 00:08:01.324 ] 00:08:01.324 } 00:08:01.324 } 00:08:01.324 }' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:01.324 BaseBdev2' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.324 04:24:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.324 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.324 [2024-12-13 04:24:01.329081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.585 "name": "Existed_Raid", 00:08:01.585 "uuid": "fb1033de-46fa-4d37-a3e3-384e06c0bde4", 00:08:01.585 "strip_size_kb": 0, 00:08:01.585 "state": "online", 00:08:01.585 "raid_level": "raid1", 00:08:01.585 "superblock": true, 00:08:01.585 "num_base_bdevs": 2, 00:08:01.585 "num_base_bdevs_discovered": 1, 00:08:01.585 "num_base_bdevs_operational": 1, 00:08:01.585 "base_bdevs_list": [ 00:08:01.585 { 00:08:01.585 "name": null, 00:08:01.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.585 "is_configured": false, 00:08:01.585 "data_offset": 0, 00:08:01.585 "data_size": 63488 00:08:01.585 }, 00:08:01.585 { 00:08:01.585 "name": "BaseBdev2", 00:08:01.585 "uuid": "5261d522-1a62-4659-b810-e1f782497876", 00:08:01.585 "is_configured": true, 00:08:01.585 "data_offset": 2048, 00:08:01.585 "data_size": 63488 00:08:01.585 } 00:08:01.585 ] 00:08:01.585 }' 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.585 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.845 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.845 [2024-12-13 04:24:01.841025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:01.845 [2024-12-13 04:24:01.841144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.104 [2024-12-13 04:24:01.862099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.104 [2024-12-13 04:24:01.862158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.104 [2024-12-13 04:24:01.862171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75876 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75876 ']' 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75876 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:02.104 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.105 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75876 00:08:02.105 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.105 killing process with pid 75876 
00:08:02.105 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.105 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75876' 00:08:02.105 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75876 00:08:02.105 [2024-12-13 04:24:01.945301] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.105 04:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75876 00:08:02.105 [2024-12-13 04:24:01.946827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.364 04:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:02.364 00:08:02.364 real 0m3.966s 00:08:02.364 user 0m6.110s 00:08:02.364 sys 0m0.835s 00:08:02.364 04:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.364 04:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.364 ************************************ 00:08:02.364 END TEST raid_state_function_test_sb 00:08:02.364 ************************************ 00:08:02.364 04:24:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:02.364 04:24:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:02.364 04:24:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.364 04:24:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.365 ************************************ 00:08:02.365 START TEST raid_superblock_test 00:08:02.365 ************************************ 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76117 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76117 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76117 ']' 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.365 04:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.625 [2024-12-13 04:24:02.424149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:02.625 [2024-12-13 04:24:02.424354] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76117 ] 00:08:02.625 [2024-12-13 04:24:02.581593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.625 [2024-12-13 04:24:02.622500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.884 [2024-12-13 04:24:02.698937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.884 [2024-12-13 04:24:02.698981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:03.454 04:24:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.454 malloc1 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.454 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.454 [2024-12-13 04:24:03.275624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:03.455 [2024-12-13 04:24:03.275740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.455 [2024-12-13 04:24:03.275787] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:03.455 [2024-12-13 04:24:03.275823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.455 
[2024-12-13 04:24:03.278234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.455 [2024-12-13 04:24:03.278309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:03.455 pt1 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.455 malloc2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.455 04:24:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.455 [2024-12-13 04:24:03.314173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.455 [2024-12-13 04:24:03.314230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.455 [2024-12-13 04:24:03.314249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:03.455 [2024-12-13 04:24:03.314260] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.455 [2024-12-13 04:24:03.316644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.455 [2024-12-13 04:24:03.316678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.455 pt2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.455 [2024-12-13 04:24:03.326193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:03.455 [2024-12-13 04:24:03.328290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.455 [2024-12-13 04:24:03.328525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:03.455 [2024-12-13 04:24:03.328546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:03.455 [2024-12-13 
04:24:03.328820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:03.455 [2024-12-13 04:24:03.328970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:03.455 [2024-12-13 04:24:03.328986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:03.455 [2024-12-13 04:24:03.329123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.455 04:24:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.455 "name": "raid_bdev1", 00:08:03.455 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:03.455 "strip_size_kb": 0, 00:08:03.455 "state": "online", 00:08:03.455 "raid_level": "raid1", 00:08:03.455 "superblock": true, 00:08:03.455 "num_base_bdevs": 2, 00:08:03.455 "num_base_bdevs_discovered": 2, 00:08:03.455 "num_base_bdevs_operational": 2, 00:08:03.455 "base_bdevs_list": [ 00:08:03.455 { 00:08:03.455 "name": "pt1", 00:08:03.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.455 "is_configured": true, 00:08:03.455 "data_offset": 2048, 00:08:03.455 "data_size": 63488 00:08:03.455 }, 00:08:03.455 { 00:08:03.455 "name": "pt2", 00:08:03.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.455 "is_configured": true, 00:08:03.455 "data_offset": 2048, 00:08:03.455 "data_size": 63488 00:08:03.455 } 00:08:03.455 ] 00:08:03.455 }' 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.455 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.025 
04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.025 [2024-12-13 04:24:03.753827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.025 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.025 "name": "raid_bdev1", 00:08:04.025 "aliases": [ 00:08:04.025 "90144e9a-eda0-4776-a723-da162d0edd3a" 00:08:04.025 ], 00:08:04.025 "product_name": "Raid Volume", 00:08:04.025 "block_size": 512, 00:08:04.025 "num_blocks": 63488, 00:08:04.025 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:04.025 "assigned_rate_limits": { 00:08:04.025 "rw_ios_per_sec": 0, 00:08:04.025 "rw_mbytes_per_sec": 0, 00:08:04.025 "r_mbytes_per_sec": 0, 00:08:04.025 "w_mbytes_per_sec": 0 00:08:04.025 }, 00:08:04.025 "claimed": false, 00:08:04.025 "zoned": false, 00:08:04.025 "supported_io_types": { 00:08:04.025 "read": true, 00:08:04.025 "write": true, 00:08:04.025 "unmap": false, 00:08:04.025 "flush": false, 00:08:04.025 "reset": true, 00:08:04.025 "nvme_admin": false, 00:08:04.025 "nvme_io": false, 00:08:04.025 "nvme_io_md": false, 00:08:04.025 "write_zeroes": true, 00:08:04.025 "zcopy": false, 00:08:04.025 "get_zone_info": false, 00:08:04.025 "zone_management": false, 00:08:04.025 "zone_append": false, 00:08:04.025 "compare": false, 00:08:04.025 "compare_and_write": false, 00:08:04.025 "abort": false, 00:08:04.025 "seek_hole": false, 
00:08:04.025 "seek_data": false, 00:08:04.025 "copy": false, 00:08:04.025 "nvme_iov_md": false 00:08:04.025 }, 00:08:04.025 "memory_domains": [ 00:08:04.025 { 00:08:04.025 "dma_device_id": "system", 00:08:04.025 "dma_device_type": 1 00:08:04.025 }, 00:08:04.025 { 00:08:04.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.025 "dma_device_type": 2 00:08:04.025 }, 00:08:04.025 { 00:08:04.025 "dma_device_id": "system", 00:08:04.025 "dma_device_type": 1 00:08:04.025 }, 00:08:04.025 { 00:08:04.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.026 "dma_device_type": 2 00:08:04.026 } 00:08:04.026 ], 00:08:04.026 "driver_specific": { 00:08:04.026 "raid": { 00:08:04.026 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:04.026 "strip_size_kb": 0, 00:08:04.026 "state": "online", 00:08:04.026 "raid_level": "raid1", 00:08:04.026 "superblock": true, 00:08:04.026 "num_base_bdevs": 2, 00:08:04.026 "num_base_bdevs_discovered": 2, 00:08:04.026 "num_base_bdevs_operational": 2, 00:08:04.026 "base_bdevs_list": [ 00:08:04.026 { 00:08:04.026 "name": "pt1", 00:08:04.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.026 "is_configured": true, 00:08:04.026 "data_offset": 2048, 00:08:04.026 "data_size": 63488 00:08:04.026 }, 00:08:04.026 { 00:08:04.026 "name": "pt2", 00:08:04.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.026 "is_configured": true, 00:08:04.026 "data_offset": 2048, 00:08:04.026 "data_size": 63488 00:08:04.026 } 00:08:04.026 ] 00:08:04.026 } 00:08:04.026 } 00:08:04.026 }' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.026 pt2' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.026 04:24:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 [2024-12-13 04:24:03.969250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.026 04:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90144e9a-eda0-4776-a723-da162d0edd3a 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 90144e9a-eda0-4776-a723-da162d0edd3a ']' 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 [2024-12-13 04:24:04.016945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.026 [2024-12-13 04:24:04.017010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.026 [2024-12-13 04:24:04.017107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.026 [2024-12-13 04:24:04.017192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.026 [2024-12-13 04:24:04.017255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:04.026 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.286 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:04.286 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:04.286 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 [2024-12-13 04:24:04.156712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:04.287 [2024-12-13 04:24:04.158767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:04.287 [2024-12-13 04:24:04.158854] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:04.287 [2024-12-13 04:24:04.158904] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:04.287 [2024-12-13 04:24:04.158921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.287 [2024-12-13 04:24:04.158930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:04.287 request: 00:08:04.287 { 00:08:04.287 "name": "raid_bdev1", 00:08:04.287 "raid_level": "raid1", 00:08:04.287 "base_bdevs": [ 00:08:04.287 "malloc1", 00:08:04.287 "malloc2" 00:08:04.287 ], 00:08:04.287 "superblock": false, 00:08:04.287 "method": "bdev_raid_create", 00:08:04.287 "req_id": 1 00:08:04.287 } 00:08:04.287 Got JSON-RPC error response 00:08:04.287 response: 00:08:04.287 { 00:08:04.287 "code": -17, 00:08:04.287 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:04.287 } 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 [2024-12-13 04:24:04.224602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.287 [2024-12-13 04:24:04.224654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.287 [2024-12-13 04:24:04.224676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:04.287 [2024-12-13 04:24:04.224684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.287 [2024-12-13 04:24:04.227017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.287 [2024-12-13 04:24:04.227117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.287 [2024-12-13 04:24:04.227188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:04.287 [2024-12-13 04:24:04.227219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:04.287 pt1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.287 04:24:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.287 "name": "raid_bdev1", 00:08:04.287 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:04.287 "strip_size_kb": 0, 00:08:04.287 "state": "configuring", 00:08:04.287 "raid_level": "raid1", 00:08:04.287 "superblock": true, 00:08:04.287 "num_base_bdevs": 2, 00:08:04.287 "num_base_bdevs_discovered": 1, 00:08:04.287 "num_base_bdevs_operational": 2, 00:08:04.287 "base_bdevs_list": [ 00:08:04.287 { 00:08:04.287 "name": "pt1", 00:08:04.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.287 
"is_configured": true, 00:08:04.287 "data_offset": 2048, 00:08:04.287 "data_size": 63488 00:08:04.287 }, 00:08:04.287 { 00:08:04.287 "name": null, 00:08:04.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.287 "is_configured": false, 00:08:04.287 "data_offset": 2048, 00:08:04.287 "data_size": 63488 00:08:04.287 } 00:08:04.287 ] 00:08:04.287 }' 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.287 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.856 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:04.856 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:04.856 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.857 [2024-12-13 04:24:04.655965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.857 [2024-12-13 04:24:04.656010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.857 [2024-12-13 04:24:04.656030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.857 [2024-12-13 04:24:04.656039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.857 [2024-12-13 04:24:04.656418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.857 [2024-12-13 04:24:04.656434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.857 [2024-12-13 04:24:04.656506] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.857 [2024-12-13 04:24:04.656524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.857 [2024-12-13 04:24:04.656615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:04.857 [2024-12-13 04:24:04.656624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.857 [2024-12-13 04:24:04.656885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:04.857 [2024-12-13 04:24:04.656993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:04.857 [2024-12-13 04:24:04.657008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:04.857 [2024-12-13 04:24:04.657102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.857 pt2 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.857 
04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.857 "name": "raid_bdev1", 00:08:04.857 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:04.857 "strip_size_kb": 0, 00:08:04.857 "state": "online", 00:08:04.857 "raid_level": "raid1", 00:08:04.857 "superblock": true, 00:08:04.857 "num_base_bdevs": 2, 00:08:04.857 "num_base_bdevs_discovered": 2, 00:08:04.857 "num_base_bdevs_operational": 2, 00:08:04.857 "base_bdevs_list": [ 00:08:04.857 { 00:08:04.857 "name": "pt1", 00:08:04.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.857 "is_configured": true, 00:08:04.857 "data_offset": 2048, 00:08:04.857 "data_size": 63488 00:08:04.857 }, 00:08:04.857 { 00:08:04.857 "name": "pt2", 00:08:04.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.857 "is_configured": true, 00:08:04.857 "data_offset": 2048, 00:08:04.857 "data_size": 63488 00:08:04.857 } 00:08:04.857 ] 00:08:04.857 }' 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:04.857 04:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.116 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.116 [2024-12-13 04:24:05.123416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.375 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.375 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.375 "name": "raid_bdev1", 00:08:05.375 "aliases": [ 00:08:05.375 "90144e9a-eda0-4776-a723-da162d0edd3a" 00:08:05.375 ], 00:08:05.375 "product_name": "Raid Volume", 00:08:05.375 "block_size": 512, 00:08:05.375 "num_blocks": 63488, 00:08:05.375 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:05.375 "assigned_rate_limits": { 00:08:05.375 "rw_ios_per_sec": 0, 00:08:05.375 "rw_mbytes_per_sec": 0, 00:08:05.375 "r_mbytes_per_sec": 0, 00:08:05.375 "w_mbytes_per_sec": 0 
00:08:05.375 }, 00:08:05.375 "claimed": false, 00:08:05.375 "zoned": false, 00:08:05.375 "supported_io_types": { 00:08:05.375 "read": true, 00:08:05.375 "write": true, 00:08:05.375 "unmap": false, 00:08:05.375 "flush": false, 00:08:05.375 "reset": true, 00:08:05.375 "nvme_admin": false, 00:08:05.375 "nvme_io": false, 00:08:05.375 "nvme_io_md": false, 00:08:05.375 "write_zeroes": true, 00:08:05.375 "zcopy": false, 00:08:05.375 "get_zone_info": false, 00:08:05.375 "zone_management": false, 00:08:05.375 "zone_append": false, 00:08:05.375 "compare": false, 00:08:05.375 "compare_and_write": false, 00:08:05.375 "abort": false, 00:08:05.375 "seek_hole": false, 00:08:05.375 "seek_data": false, 00:08:05.375 "copy": false, 00:08:05.375 "nvme_iov_md": false 00:08:05.376 }, 00:08:05.376 "memory_domains": [ 00:08:05.376 { 00:08:05.376 "dma_device_id": "system", 00:08:05.376 "dma_device_type": 1 00:08:05.376 }, 00:08:05.376 { 00:08:05.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.376 "dma_device_type": 2 00:08:05.376 }, 00:08:05.376 { 00:08:05.376 "dma_device_id": "system", 00:08:05.376 "dma_device_type": 1 00:08:05.376 }, 00:08:05.376 { 00:08:05.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.376 "dma_device_type": 2 00:08:05.376 } 00:08:05.376 ], 00:08:05.376 "driver_specific": { 00:08:05.376 "raid": { 00:08:05.376 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:05.376 "strip_size_kb": 0, 00:08:05.376 "state": "online", 00:08:05.376 "raid_level": "raid1", 00:08:05.376 "superblock": true, 00:08:05.376 "num_base_bdevs": 2, 00:08:05.376 "num_base_bdevs_discovered": 2, 00:08:05.376 "num_base_bdevs_operational": 2, 00:08:05.376 "base_bdevs_list": [ 00:08:05.376 { 00:08:05.376 "name": "pt1", 00:08:05.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.376 "is_configured": true, 00:08:05.376 "data_offset": 2048, 00:08:05.376 "data_size": 63488 00:08:05.376 }, 00:08:05.376 { 00:08:05.376 "name": "pt2", 00:08:05.376 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:05.376 "is_configured": true, 00:08:05.376 "data_offset": 2048, 00:08:05.376 "data_size": 63488 00:08:05.376 } 00:08:05.376 ] 00:08:05.376 } 00:08:05.376 } 00:08:05.376 }' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:05.376 pt2' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.376 [2024-12-13 04:24:05.370964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.376 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 90144e9a-eda0-4776-a723-da162d0edd3a '!=' 90144e9a-eda0-4776-a723-da162d0edd3a ']' 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.636 [2024-12-13 04:24:05.414688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:05.636 "name": "raid_bdev1", 00:08:05.636 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:05.636 "strip_size_kb": 0, 00:08:05.636 "state": "online", 00:08:05.636 "raid_level": "raid1", 00:08:05.636 "superblock": true, 00:08:05.636 "num_base_bdevs": 2, 00:08:05.636 "num_base_bdevs_discovered": 1, 00:08:05.636 "num_base_bdevs_operational": 1, 00:08:05.636 "base_bdevs_list": [ 00:08:05.636 { 00:08:05.636 "name": null, 00:08:05.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.636 "is_configured": false, 00:08:05.636 "data_offset": 0, 00:08:05.636 "data_size": 63488 00:08:05.636 }, 00:08:05.636 { 00:08:05.636 "name": "pt2", 00:08:05.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.636 "is_configured": true, 00:08:05.636 "data_offset": 2048, 00:08:05.636 "data_size": 63488 00:08:05.636 } 00:08:05.636 ] 00:08:05.636 }' 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.636 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.895 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.895 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.896 [2024-12-13 04:24:05.869897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.896 [2024-12-13 04:24:05.869924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.896 [2024-12-13 04:24:05.869984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.896 [2024-12-13 04:24:05.870028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.896 [2024-12-13 04:24:05.870037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.896 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.155 [2024-12-13 04:24:05.941762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:06.155 [2024-12-13 04:24:05.941807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.155 [2024-12-13 04:24:05.941828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:06.155 [2024-12-13 04:24:05.941837] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.155 [2024-12-13 04:24:05.944211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.155 [2024-12-13 04:24:05.944283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:06.155 [2024-12-13 04:24:05.944360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:06.155 [2024-12-13 04:24:05.944414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.155 [2024-12-13 04:24:05.944519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:06.155 [2024-12-13 04:24:05.944528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.155 [2024-12-13 04:24:05.944759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:06.155 [2024-12-13 04:24:05.944877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:06.155 [2024-12-13 04:24:05.944888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001c80 00:08:06.155 [2024-12-13 04:24:05.944985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.155 pt2 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.155 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:06.155 "name": "raid_bdev1", 00:08:06.155 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:06.155 "strip_size_kb": 0, 00:08:06.155 "state": "online", 00:08:06.155 "raid_level": "raid1", 00:08:06.155 "superblock": true, 00:08:06.155 "num_base_bdevs": 2, 00:08:06.155 "num_base_bdevs_discovered": 1, 00:08:06.155 "num_base_bdevs_operational": 1, 00:08:06.155 "base_bdevs_list": [ 00:08:06.155 { 00:08:06.156 "name": null, 00:08:06.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.156 "is_configured": false, 00:08:06.156 "data_offset": 2048, 00:08:06.156 "data_size": 63488 00:08:06.156 }, 00:08:06.156 { 00:08:06.156 "name": "pt2", 00:08:06.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.156 "is_configured": true, 00:08:06.156 "data_offset": 2048, 00:08:06.156 "data_size": 63488 00:08:06.156 } 00:08:06.156 ] 00:08:06.156 }' 00:08:06.156 04:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.156 04:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.415 [2024-12-13 04:24:06.345071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.415 [2024-12-13 04:24:06.345135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.415 [2024-12-13 04:24:06.345205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.415 [2024-12-13 04:24:06.345254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.415 [2024-12-13 04:24:06.345289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.415 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.415 [2024-12-13 04:24:06.408964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:06.415 [2024-12-13 04:24:06.409057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.415 [2024-12-13 04:24:06.409088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:06.415 [2024-12-13 04:24:06.409120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.415 [2024-12-13 04:24:06.411486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.415 [2024-12-13 04:24:06.411565] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:06.415 [2024-12-13 04:24:06.411662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:06.416 [2024-12-13 04:24:06.411741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:06.416 [2024-12-13 04:24:06.411857] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:06.416 [2024-12-13 04:24:06.411909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.416 [2024-12-13 04:24:06.411980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:08:06.416 [2024-12-13 04:24:06.412058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.416 [2024-12-13 04:24:06.412155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:06.416 [2024-12-13 04:24:06.412195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.416 [2024-12-13 04:24:06.412437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:06.416 [2024-12-13 04:24:06.412602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:06.416 [2024-12-13 04:24:06.412638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:06.416 [2024-12-13 04:24:06.412783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.416 pt1 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.416 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.675 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.675 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.675 "name": "raid_bdev1", 00:08:06.675 "uuid": "90144e9a-eda0-4776-a723-da162d0edd3a", 00:08:06.675 "strip_size_kb": 0, 00:08:06.675 "state": "online", 00:08:06.675 "raid_level": "raid1", 00:08:06.675 "superblock": true, 00:08:06.675 "num_base_bdevs": 2, 00:08:06.675 "num_base_bdevs_discovered": 1, 00:08:06.675 "num_base_bdevs_operational": 
1, 00:08:06.675 "base_bdevs_list": [ 00:08:06.675 { 00:08:06.675 "name": null, 00:08:06.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.675 "is_configured": false, 00:08:06.675 "data_offset": 2048, 00:08:06.675 "data_size": 63488 00:08:06.675 }, 00:08:06.675 { 00:08:06.675 "name": "pt2", 00:08:06.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.675 "is_configured": true, 00:08:06.675 "data_offset": 2048, 00:08:06.675 "data_size": 63488 00:08:06.675 } 00:08:06.675 ] 00:08:06.675 }' 00:08:06.675 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.675 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.935 [2024-12-13 04:24:06.888367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 90144e9a-eda0-4776-a723-da162d0edd3a '!=' 90144e9a-eda0-4776-a723-da162d0edd3a ']' 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76117 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76117 ']' 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76117 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.935 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76117 00:08:07.195 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.195 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.195 killing process with pid 76117 00:08:07.195 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76117' 00:08:07.195 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76117 00:08:07.195 [2024-12-13 04:24:06.971591] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.195 [2024-12-13 04:24:06.971654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.195 [2024-12-13 04:24:06.971694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.195 [2024-12-13 04:24:06.971702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:07.195 04:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 
76117 00:08:07.195 [2024-12-13 04:24:07.013830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.455 04:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:07.455 00:08:07.455 real 0m4.994s 00:08:07.455 user 0m8.024s 00:08:07.455 sys 0m1.082s 00:08:07.455 ************************************ 00:08:07.455 END TEST raid_superblock_test 00:08:07.455 ************************************ 00:08:07.455 04:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.455 04:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.455 04:24:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:07.455 04:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:07.455 04:24:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.455 04:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.455 ************************************ 00:08:07.455 START TEST raid_read_error_test 00:08:07.455 ************************************ 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DKDn8U1pHk 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76436 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76436 00:08:07.455 
04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76436 ']' 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.455 04:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.715 [2024-12-13 04:24:07.514674] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:07.715 [2024-12-13 04:24:07.514891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76436 ] 00:08:07.715 [2024-12-13 04:24:07.671141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.715 [2024-12-13 04:24:07.709609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.974 [2024-12-13 04:24:07.785755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.974 [2024-12-13 04:24:07.785879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 BaseBdev1_malloc 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 true 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 [2024-12-13 04:24:08.386494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:08.546 [2024-12-13 04:24:08.386546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.546 [2024-12-13 04:24:08.386571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:08.546 [2024-12-13 04:24:08.386580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.546 [2024-12-13 04:24:08.389040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.546 [2024-12-13 04:24:08.389084] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:08.546 BaseBdev1 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 BaseBdev2_malloc 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 true 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 [2024-12-13 04:24:08.433092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:08.546 [2024-12-13 04:24:08.433141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.546 [2024-12-13 04:24:08.433162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:08.546 [2024-12-13 04:24:08.433179] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.546 [2024-12-13 04:24:08.435626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.546 [2024-12-13 04:24:08.435662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:08.546 BaseBdev2 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:08.546 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.547 [2024-12-13 04:24:08.445116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.547 [2024-12-13 04:24:08.447338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.547 [2024-12-13 04:24:08.447542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:08.547 [2024-12-13 04:24:08.447555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.547 [2024-12-13 04:24:08.447864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:08.547 [2024-12-13 04:24:08.448054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:08.547 [2024-12-13 04:24:08.448068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:08.547 [2024-12-13 04:24:08.448196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.547 "name": "raid_bdev1", 00:08:08.547 "uuid": "7a905f3f-2ee6-416d-aefc-73e6bf21532d", 00:08:08.547 "strip_size_kb": 0, 00:08:08.547 "state": "online", 00:08:08.547 "raid_level": "raid1", 00:08:08.547 "superblock": true, 00:08:08.547 "num_base_bdevs": 2, 00:08:08.547 
"num_base_bdevs_discovered": 2, 00:08:08.547 "num_base_bdevs_operational": 2, 00:08:08.547 "base_bdevs_list": [ 00:08:08.547 { 00:08:08.547 "name": "BaseBdev1", 00:08:08.547 "uuid": "b8e4f9b6-c352-52cb-b930-15a675d6b872", 00:08:08.547 "is_configured": true, 00:08:08.547 "data_offset": 2048, 00:08:08.547 "data_size": 63488 00:08:08.547 }, 00:08:08.547 { 00:08:08.547 "name": "BaseBdev2", 00:08:08.547 "uuid": "a7218496-85b8-5a8d-aed4-ba819b918918", 00:08:08.547 "is_configured": true, 00:08:08.547 "data_offset": 2048, 00:08:08.547 "data_size": 63488 00:08:08.547 } 00:08:08.547 ] 00:08:08.547 }' 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.547 04:24:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.153 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:09.153 04:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:09.153 [2024-12-13 04:24:09.004670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:10.091 04:24:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.091 "name": "raid_bdev1", 00:08:10.091 "uuid": "7a905f3f-2ee6-416d-aefc-73e6bf21532d", 00:08:10.091 "strip_size_kb": 0, 00:08:10.091 "state": "online", 
00:08:10.091 "raid_level": "raid1", 00:08:10.091 "superblock": true, 00:08:10.091 "num_base_bdevs": 2, 00:08:10.091 "num_base_bdevs_discovered": 2, 00:08:10.091 "num_base_bdevs_operational": 2, 00:08:10.091 "base_bdevs_list": [ 00:08:10.091 { 00:08:10.091 "name": "BaseBdev1", 00:08:10.091 "uuid": "b8e4f9b6-c352-52cb-b930-15a675d6b872", 00:08:10.091 "is_configured": true, 00:08:10.091 "data_offset": 2048, 00:08:10.091 "data_size": 63488 00:08:10.091 }, 00:08:10.091 { 00:08:10.091 "name": "BaseBdev2", 00:08:10.091 "uuid": "a7218496-85b8-5a8d-aed4-ba819b918918", 00:08:10.091 "is_configured": true, 00:08:10.091 "data_offset": 2048, 00:08:10.091 "data_size": 63488 00:08:10.091 } 00:08:10.091 ] 00:08:10.091 }' 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.091 04:24:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.660 [2024-12-13 04:24:10.375148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.660 [2024-12-13 04:24:10.375192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.660 [2024-12-13 04:24:10.377983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.660 [2024-12-13 04:24:10.378063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.660 [2024-12-13 04:24:10.378179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.660 [2024-12-13 04:24:10.378244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name 
raid_bdev1, state offline 00:08:10.660 { 00:08:10.660 "results": [ 00:08:10.660 { 00:08:10.660 "job": "raid_bdev1", 00:08:10.660 "core_mask": "0x1", 00:08:10.660 "workload": "randrw", 00:08:10.660 "percentage": 50, 00:08:10.660 "status": "finished", 00:08:10.660 "queue_depth": 1, 00:08:10.660 "io_size": 131072, 00:08:10.660 "runtime": 1.371158, 00:08:10.660 "iops": 15839.895912797796, 00:08:10.660 "mibps": 1979.9869890997245, 00:08:10.660 "io_failed": 0, 00:08:10.660 "io_timeout": 0, 00:08:10.660 "avg_latency_us": 60.57683291409067, 00:08:10.660 "min_latency_us": 22.69344978165939, 00:08:10.660 "max_latency_us": 1423.7624454148472 00:08:10.660 } 00:08:10.660 ], 00:08:10.660 "core_count": 1 00:08:10.660 } 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76436 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76436 ']' 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76436 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76436 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76436' 00:08:10.660 killing process with pid 76436 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76436 00:08:10.660 [2024-12-13 
04:24:10.415248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.660 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76436 00:08:10.661 [2024-12-13 04:24:10.445062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DKDn8U1pHk 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:10.920 ************************************ 00:08:10.920 END TEST raid_read_error_test 00:08:10.920 ************************************ 00:08:10.920 00:08:10.920 real 0m3.365s 00:08:10.920 user 0m4.177s 00:08:10.920 sys 0m0.602s 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.920 04:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.920 04:24:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:10.920 04:24:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.920 04:24:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.920 04:24:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.920 ************************************ 00:08:10.920 START TEST 
raid_write_error_test 00:08:10.920 ************************************ 00:08:10.920 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:10.920 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:10.920 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:10.920 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:10.920 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.921 04:24:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.csDNTvA9iz 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76571 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76571 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76571 ']' 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.921 04:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.180 [2024-12-13 04:24:10.956800] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:11.180 [2024-12-13 04:24:10.956999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76571 ] 00:08:11.180 [2024-12-13 04:24:11.113837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.180 [2024-12-13 04:24:11.151847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.439 [2024-12-13 04:24:11.228421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.439 [2024-12-13 04:24:11.228466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 BaseBdev1_malloc 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 true 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 [2024-12-13 04:24:11.821588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:12.010 [2024-12-13 04:24:11.821701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.010 [2024-12-13 04:24:11.821728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:12.010 [2024-12-13 04:24:11.821737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.010 [2024-12-13 04:24:11.824213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.010 [2024-12-13 04:24:11.824248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:12.010 BaseBdev1 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 BaseBdev2_malloc 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:12.010 04:24:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 true 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 [2024-12-13 04:24:11.867940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:12.010 [2024-12-13 04:24:11.868046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.010 [2024-12-13 04:24:11.868071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:12.010 [2024-12-13 04:24:11.868090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.010 [2024-12-13 04:24:11.870449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.010 [2024-12-13 04:24:11.870494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:12.010 BaseBdev2 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 [2024-12-13 04:24:11.879971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:12.010 [2024-12-13 04:24:11.882100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.010 [2024-12-13 04:24:11.882307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:12.010 [2024-12-13 04:24:11.882319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.010 [2024-12-13 04:24:11.882591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:12.010 [2024-12-13 04:24:11.882765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:12.010 [2024-12-13 04:24:11.882800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:12.010 [2024-12-13 04:24:11.882939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.010 "name": "raid_bdev1", 00:08:12.010 "uuid": "b404ebb2-577a-4ff1-8d2a-e4a42e8e7a18", 00:08:12.010 "strip_size_kb": 0, 00:08:12.010 "state": "online", 00:08:12.010 "raid_level": "raid1", 00:08:12.010 "superblock": true, 00:08:12.010 "num_base_bdevs": 2, 00:08:12.010 "num_base_bdevs_discovered": 2, 00:08:12.010 "num_base_bdevs_operational": 2, 00:08:12.010 "base_bdevs_list": [ 00:08:12.010 { 00:08:12.010 "name": "BaseBdev1", 00:08:12.010 "uuid": "0479b0b5-8249-502e-8cc5-f4488486e62e", 00:08:12.010 "is_configured": true, 00:08:12.010 "data_offset": 2048, 00:08:12.010 "data_size": 63488 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "name": "BaseBdev2", 00:08:12.010 "uuid": "2d50b049-3300-502e-8055-dc85b55a8118", 00:08:12.010 "is_configured": true, 00:08:12.010 "data_offset": 2048, 00:08:12.010 "data_size": 63488 00:08:12.010 } 00:08:12.010 ] 00:08:12.010 }' 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.010 04:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.581 04:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:12.581 04:24:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.581 [2024-12-13 04:24:12.435521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.520 [2024-12-13 04:24:13.347208] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:13.520 [2024-12-13 04:24:13.347336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.520 [2024-12-13 04:24:13.347649] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002a10 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.520 "name": "raid_bdev1", 00:08:13.520 "uuid": "b404ebb2-577a-4ff1-8d2a-e4a42e8e7a18", 00:08:13.520 "strip_size_kb": 0, 00:08:13.520 "state": "online", 00:08:13.520 "raid_level": "raid1", 00:08:13.520 "superblock": true, 00:08:13.520 "num_base_bdevs": 2, 00:08:13.520 "num_base_bdevs_discovered": 1, 00:08:13.520 "num_base_bdevs_operational": 1, 00:08:13.520 "base_bdevs_list": [ 00:08:13.520 { 00:08:13.520 "name": null, 00:08:13.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.520 "is_configured": false, 00:08:13.520 "data_offset": 0, 00:08:13.520 "data_size": 63488 00:08:13.520 }, 00:08:13.520 { 00:08:13.520 "name": 
"BaseBdev2", 00:08:13.520 "uuid": "2d50b049-3300-502e-8055-dc85b55a8118", 00:08:13.520 "is_configured": true, 00:08:13.520 "data_offset": 2048, 00:08:13.520 "data_size": 63488 00:08:13.520 } 00:08:13.520 ] 00:08:13.520 }' 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.520 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.089 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.089 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.089 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.089 [2024-12-13 04:24:13.840347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.089 [2024-12-13 04:24:13.840495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.089 [2024-12-13 04:24:13.843177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.089 [2024-12-13 04:24:13.843278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.089 [2024-12-13 04:24:13.843361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.089 [2024-12-13 04:24:13.843414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:14.089 { 00:08:14.089 "results": [ 00:08:14.089 { 00:08:14.089 "job": "raid_bdev1", 00:08:14.089 "core_mask": "0x1", 00:08:14.089 "workload": "randrw", 00:08:14.089 "percentage": 50, 00:08:14.089 "status": "finished", 00:08:14.089 "queue_depth": 1, 00:08:14.089 "io_size": 131072, 00:08:14.089 "runtime": 1.405554, 00:08:14.089 "iops": 19615.752934430126, 00:08:14.089 "mibps": 2451.9691168037657, 00:08:14.089 "io_failed": 0, 00:08:14.089 "io_timeout": 0, 
00:08:14.089 "avg_latency_us": 48.341028917955214, 00:08:14.089 "min_latency_us": 22.022707423580787, 00:08:14.089 "max_latency_us": 1366.5257641921398 00:08:14.089 } 00:08:14.089 ], 00:08:14.089 "core_count": 1 00:08:14.089 } 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76571 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76571 ']' 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76571 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76571 00:08:14.090 killing process with pid 76571 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76571' 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76571 00:08:14.090 [2024-12-13 04:24:13.888855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.090 04:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76571 00:08:14.090 [2024-12-13 04:24:13.916895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.csDNTvA9iz 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:14.349 04:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:14.349 00:08:14.350 real 0m3.393s 00:08:14.350 user 0m4.263s 00:08:14.350 sys 0m0.590s 00:08:14.350 ************************************ 00:08:14.350 END TEST raid_write_error_test 00:08:14.350 ************************************ 00:08:14.350 04:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.350 04:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.350 04:24:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:14.350 04:24:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:14.350 04:24:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:14.350 04:24:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.350 04:24:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.350 04:24:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.350 ************************************ 00:08:14.350 START TEST raid_state_function_test 00:08:14.350 ************************************ 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:14.350 
04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76698 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76698' 00:08:14.350 Process raid pid: 76698 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76698 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76698 ']' 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.350 04:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.610 [2024-12-13 04:24:14.418420] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:14.610 [2024-12-13 04:24:14.418618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.610 [2024-12-13 04:24:14.573866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.610 [2024-12-13 04:24:14.612789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.869 [2024-12-13 04:24:14.688404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.869 [2024-12-13 04:24:14.688560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.438 [2024-12-13 04:24:15.242326] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.438 [2024-12-13 04:24:15.242394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.438 [2024-12-13 04:24:15.242406] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.438 [2024-12-13 04:24:15.242417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.438 [2024-12-13 04:24:15.242423] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.438 [2024-12-13 04:24:15.242436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.438 "name": "Existed_Raid", 00:08:15.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.438 "strip_size_kb": 64, 00:08:15.438 "state": "configuring", 00:08:15.438 "raid_level": "raid0", 00:08:15.438 "superblock": false, 00:08:15.438 "num_base_bdevs": 3, 00:08:15.438 "num_base_bdevs_discovered": 0, 00:08:15.438 "num_base_bdevs_operational": 3, 00:08:15.438 "base_bdevs_list": [ 00:08:15.438 { 00:08:15.438 "name": "BaseBdev1", 00:08:15.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.438 "is_configured": false, 00:08:15.438 "data_offset": 0, 00:08:15.438 "data_size": 0 00:08:15.438 }, 00:08:15.438 { 00:08:15.438 "name": "BaseBdev2", 00:08:15.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.438 "is_configured": false, 00:08:15.438 "data_offset": 0, 00:08:15.438 "data_size": 0 00:08:15.438 }, 00:08:15.438 { 00:08:15.438 "name": "BaseBdev3", 00:08:15.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.438 "is_configured": false, 00:08:15.438 "data_offset": 0, 00:08:15.438 "data_size": 0 00:08:15.438 } 00:08:15.438 ] 00:08:15.438 }' 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.438 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.697 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.698 04:24:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.698 [2024-12-13 04:24:15.669541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.698 [2024-12-13 04:24:15.669625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.698 [2024-12-13 04:24:15.677541] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.698 [2024-12-13 04:24:15.677617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.698 [2024-12-13 04:24:15.677644] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.698 [2024-12-13 04:24:15.677666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.698 [2024-12-13 04:24:15.677683] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.698 [2024-12-13 04:24:15.677704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.698 [2024-12-13 04:24:15.700586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.698 BaseBdev1 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.698 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.958 [ 00:08:15.958 { 00:08:15.958 "name": "BaseBdev1", 00:08:15.958 "aliases": [ 00:08:15.958 "d41e362d-55d6-4eaa-a93a-c0982fccce1b" 00:08:15.958 ], 00:08:15.958 
"product_name": "Malloc disk", 00:08:15.958 "block_size": 512, 00:08:15.958 "num_blocks": 65536, 00:08:15.958 "uuid": "d41e362d-55d6-4eaa-a93a-c0982fccce1b", 00:08:15.958 "assigned_rate_limits": { 00:08:15.958 "rw_ios_per_sec": 0, 00:08:15.958 "rw_mbytes_per_sec": 0, 00:08:15.958 "r_mbytes_per_sec": 0, 00:08:15.958 "w_mbytes_per_sec": 0 00:08:15.958 }, 00:08:15.958 "claimed": true, 00:08:15.958 "claim_type": "exclusive_write", 00:08:15.958 "zoned": false, 00:08:15.958 "supported_io_types": { 00:08:15.958 "read": true, 00:08:15.958 "write": true, 00:08:15.958 "unmap": true, 00:08:15.958 "flush": true, 00:08:15.958 "reset": true, 00:08:15.958 "nvme_admin": false, 00:08:15.958 "nvme_io": false, 00:08:15.958 "nvme_io_md": false, 00:08:15.958 "write_zeroes": true, 00:08:15.958 "zcopy": true, 00:08:15.958 "get_zone_info": false, 00:08:15.958 "zone_management": false, 00:08:15.958 "zone_append": false, 00:08:15.958 "compare": false, 00:08:15.958 "compare_and_write": false, 00:08:15.958 "abort": true, 00:08:15.958 "seek_hole": false, 00:08:15.958 "seek_data": false, 00:08:15.958 "copy": true, 00:08:15.958 "nvme_iov_md": false 00:08:15.958 }, 00:08:15.958 "memory_domains": [ 00:08:15.958 { 00:08:15.958 "dma_device_id": "system", 00:08:15.958 "dma_device_type": 1 00:08:15.958 }, 00:08:15.958 { 00:08:15.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.958 "dma_device_type": 2 00:08:15.958 } 00:08:15.958 ], 00:08:15.958 "driver_specific": {} 00:08:15.958 } 00:08:15.958 ] 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.958 04:24:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.958 "name": "Existed_Raid", 00:08:15.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.958 "strip_size_kb": 64, 00:08:15.958 "state": "configuring", 00:08:15.958 "raid_level": "raid0", 00:08:15.958 "superblock": false, 00:08:15.958 "num_base_bdevs": 3, 00:08:15.958 "num_base_bdevs_discovered": 1, 00:08:15.958 "num_base_bdevs_operational": 3, 00:08:15.958 "base_bdevs_list": [ 00:08:15.958 { 00:08:15.958 "name": "BaseBdev1", 
00:08:15.958 "uuid": "d41e362d-55d6-4eaa-a93a-c0982fccce1b", 00:08:15.958 "is_configured": true, 00:08:15.958 "data_offset": 0, 00:08:15.958 "data_size": 65536 00:08:15.958 }, 00:08:15.958 { 00:08:15.958 "name": "BaseBdev2", 00:08:15.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.958 "is_configured": false, 00:08:15.958 "data_offset": 0, 00:08:15.958 "data_size": 0 00:08:15.958 }, 00:08:15.958 { 00:08:15.958 "name": "BaseBdev3", 00:08:15.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.958 "is_configured": false, 00:08:15.958 "data_offset": 0, 00:08:15.958 "data_size": 0 00:08:15.958 } 00:08:15.958 ] 00:08:15.958 }' 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.958 04:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.218 [2024-12-13 04:24:16.155815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.218 [2024-12-13 04:24:16.155931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.218 [2024-12-13 
04:24:16.167834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.218 [2024-12-13 04:24:16.170049] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.218 [2024-12-13 04:24:16.170125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.218 [2024-12-13 04:24:16.170153] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:16.218 [2024-12-13 04:24:16.170176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.218 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.219 "name": "Existed_Raid", 00:08:16.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.219 "strip_size_kb": 64, 00:08:16.219 "state": "configuring", 00:08:16.219 "raid_level": "raid0", 00:08:16.219 "superblock": false, 00:08:16.219 "num_base_bdevs": 3, 00:08:16.219 "num_base_bdevs_discovered": 1, 00:08:16.219 "num_base_bdevs_operational": 3, 00:08:16.219 "base_bdevs_list": [ 00:08:16.219 { 00:08:16.219 "name": "BaseBdev1", 00:08:16.219 "uuid": "d41e362d-55d6-4eaa-a93a-c0982fccce1b", 00:08:16.219 "is_configured": true, 00:08:16.219 "data_offset": 0, 00:08:16.219 "data_size": 65536 00:08:16.219 }, 00:08:16.219 { 00:08:16.219 "name": "BaseBdev2", 00:08:16.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.219 "is_configured": false, 00:08:16.219 "data_offset": 0, 00:08:16.219 "data_size": 0 00:08:16.219 }, 00:08:16.219 { 00:08:16.219 "name": "BaseBdev3", 00:08:16.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.219 "is_configured": false, 00:08:16.219 "data_offset": 0, 00:08:16.219 "data_size": 0 00:08:16.219 } 00:08:16.219 ] 00:08:16.219 }' 00:08:16.219 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:16.219 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.788 [2024-12-13 04:24:16.579803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.788 BaseBdev2 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:16.788 04:24:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.788 [ 00:08:16.788 { 00:08:16.788 "name": "BaseBdev2", 00:08:16.788 "aliases": [ 00:08:16.788 "1c36e00d-f188-4f62-ad09-017242bf1595" 00:08:16.788 ], 00:08:16.788 "product_name": "Malloc disk", 00:08:16.788 "block_size": 512, 00:08:16.788 "num_blocks": 65536, 00:08:16.788 "uuid": "1c36e00d-f188-4f62-ad09-017242bf1595", 00:08:16.788 "assigned_rate_limits": { 00:08:16.788 "rw_ios_per_sec": 0, 00:08:16.788 "rw_mbytes_per_sec": 0, 00:08:16.788 "r_mbytes_per_sec": 0, 00:08:16.788 "w_mbytes_per_sec": 0 00:08:16.788 }, 00:08:16.788 "claimed": true, 00:08:16.788 "claim_type": "exclusive_write", 00:08:16.788 "zoned": false, 00:08:16.788 "supported_io_types": { 00:08:16.788 "read": true, 00:08:16.788 "write": true, 00:08:16.788 "unmap": true, 00:08:16.788 "flush": true, 00:08:16.788 "reset": true, 00:08:16.788 "nvme_admin": false, 00:08:16.788 "nvme_io": false, 00:08:16.788 "nvme_io_md": false, 00:08:16.788 "write_zeroes": true, 00:08:16.788 "zcopy": true, 00:08:16.788 "get_zone_info": false, 00:08:16.788 "zone_management": false, 00:08:16.788 "zone_append": false, 00:08:16.788 "compare": false, 00:08:16.788 "compare_and_write": false, 00:08:16.788 "abort": true, 00:08:16.788 "seek_hole": false, 00:08:16.788 "seek_data": false, 00:08:16.788 "copy": true, 00:08:16.788 "nvme_iov_md": false 00:08:16.788 }, 00:08:16.788 "memory_domains": [ 00:08:16.788 { 00:08:16.788 "dma_device_id": "system", 00:08:16.788 "dma_device_type": 1 00:08:16.788 }, 00:08:16.788 { 00:08:16.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.788 "dma_device_type": 2 00:08:16.788 } 00:08:16.788 ], 00:08:16.788 "driver_specific": {} 00:08:16.788 } 00:08:16.788 ] 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.788 04:24:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.788 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.788 "name": "Existed_Raid", 00:08:16.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.788 "strip_size_kb": 64, 00:08:16.788 "state": "configuring", 00:08:16.788 "raid_level": "raid0", 00:08:16.788 "superblock": false, 00:08:16.789 "num_base_bdevs": 3, 00:08:16.789 "num_base_bdevs_discovered": 2, 00:08:16.789 "num_base_bdevs_operational": 3, 00:08:16.789 "base_bdevs_list": [ 00:08:16.789 { 00:08:16.789 "name": "BaseBdev1", 00:08:16.789 "uuid": "d41e362d-55d6-4eaa-a93a-c0982fccce1b", 00:08:16.789 "is_configured": true, 00:08:16.789 "data_offset": 0, 00:08:16.789 "data_size": 65536 00:08:16.789 }, 00:08:16.789 { 00:08:16.789 "name": "BaseBdev2", 00:08:16.789 "uuid": "1c36e00d-f188-4f62-ad09-017242bf1595", 00:08:16.789 "is_configured": true, 00:08:16.789 "data_offset": 0, 00:08:16.789 "data_size": 65536 00:08:16.789 }, 00:08:16.789 { 00:08:16.789 "name": "BaseBdev3", 00:08:16.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.789 "is_configured": false, 00:08:16.789 "data_offset": 0, 00:08:16.789 "data_size": 0 00:08:16.789 } 00:08:16.789 ] 00:08:16.789 }' 00:08:16.789 04:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.789 04:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.048 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.048 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.307 [2024-12-13 04:24:17.079669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.307 [2024-12-13 04:24:17.079798] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:17.307 [2024-12-13 04:24:17.079837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:17.307 [2024-12-13 04:24:17.080242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:17.307 [2024-12-13 04:24:17.080554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:17.307 [2024-12-13 04:24:17.080612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:17.307 [2024-12-13 04:24:17.080935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.307 BaseBdev3 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.307 
04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.307 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.307 [ 00:08:17.307 { 00:08:17.307 "name": "BaseBdev3", 00:08:17.307 "aliases": [ 00:08:17.307 "4a7314fe-c04b-4d9e-ab40-30720f17a86c" 00:08:17.307 ], 00:08:17.307 "product_name": "Malloc disk", 00:08:17.307 "block_size": 512, 00:08:17.307 "num_blocks": 65536, 00:08:17.307 "uuid": "4a7314fe-c04b-4d9e-ab40-30720f17a86c", 00:08:17.307 "assigned_rate_limits": { 00:08:17.307 "rw_ios_per_sec": 0, 00:08:17.307 "rw_mbytes_per_sec": 0, 00:08:17.307 "r_mbytes_per_sec": 0, 00:08:17.307 "w_mbytes_per_sec": 0 00:08:17.308 }, 00:08:17.308 "claimed": true, 00:08:17.308 "claim_type": "exclusive_write", 00:08:17.308 "zoned": false, 00:08:17.308 "supported_io_types": { 00:08:17.308 "read": true, 00:08:17.308 "write": true, 00:08:17.308 "unmap": true, 00:08:17.308 "flush": true, 00:08:17.308 "reset": true, 00:08:17.308 "nvme_admin": false, 00:08:17.308 "nvme_io": false, 00:08:17.308 "nvme_io_md": false, 00:08:17.308 "write_zeroes": true, 00:08:17.308 "zcopy": true, 00:08:17.308 "get_zone_info": false, 00:08:17.308 "zone_management": false, 00:08:17.308 "zone_append": false, 00:08:17.308 "compare": false, 00:08:17.308 "compare_and_write": false, 00:08:17.308 "abort": true, 00:08:17.308 "seek_hole": false, 00:08:17.308 "seek_data": false, 00:08:17.308 "copy": true, 00:08:17.308 "nvme_iov_md": false 00:08:17.308 }, 00:08:17.308 "memory_domains": [ 00:08:17.308 { 00:08:17.308 "dma_device_id": "system", 00:08:17.308 "dma_device_type": 1 00:08:17.308 }, 00:08:17.308 { 00:08:17.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.308 "dma_device_type": 2 00:08:17.308 } 00:08:17.308 ], 00:08:17.308 "driver_specific": {} 00:08:17.308 } 00:08:17.308 ] 
00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.308 "name": "Existed_Raid", 00:08:17.308 "uuid": "cb5cc250-9d57-4480-abdf-5e94b9cf8b7d", 00:08:17.308 "strip_size_kb": 64, 00:08:17.308 "state": "online", 00:08:17.308 "raid_level": "raid0", 00:08:17.308 "superblock": false, 00:08:17.308 "num_base_bdevs": 3, 00:08:17.308 "num_base_bdevs_discovered": 3, 00:08:17.308 "num_base_bdevs_operational": 3, 00:08:17.308 "base_bdevs_list": [ 00:08:17.308 { 00:08:17.308 "name": "BaseBdev1", 00:08:17.308 "uuid": "d41e362d-55d6-4eaa-a93a-c0982fccce1b", 00:08:17.308 "is_configured": true, 00:08:17.308 "data_offset": 0, 00:08:17.308 "data_size": 65536 00:08:17.308 }, 00:08:17.308 { 00:08:17.308 "name": "BaseBdev2", 00:08:17.308 "uuid": "1c36e00d-f188-4f62-ad09-017242bf1595", 00:08:17.308 "is_configured": true, 00:08:17.308 "data_offset": 0, 00:08:17.308 "data_size": 65536 00:08:17.308 }, 00:08:17.308 { 00:08:17.308 "name": "BaseBdev3", 00:08:17.308 "uuid": "4a7314fe-c04b-4d9e-ab40-30720f17a86c", 00:08:17.308 "is_configured": true, 00:08:17.308 "data_offset": 0, 00:08:17.308 "data_size": 65536 00:08:17.308 } 00:08:17.308 ] 00:08:17.308 }' 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.308 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.568 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.568 [2024-12-13 04:24:17.567132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.828 "name": "Existed_Raid", 00:08:17.828 "aliases": [ 00:08:17.828 "cb5cc250-9d57-4480-abdf-5e94b9cf8b7d" 00:08:17.828 ], 00:08:17.828 "product_name": "Raid Volume", 00:08:17.828 "block_size": 512, 00:08:17.828 "num_blocks": 196608, 00:08:17.828 "uuid": "cb5cc250-9d57-4480-abdf-5e94b9cf8b7d", 00:08:17.828 "assigned_rate_limits": { 00:08:17.828 "rw_ios_per_sec": 0, 00:08:17.828 "rw_mbytes_per_sec": 0, 00:08:17.828 "r_mbytes_per_sec": 0, 00:08:17.828 "w_mbytes_per_sec": 0 00:08:17.828 }, 00:08:17.828 "claimed": false, 00:08:17.828 "zoned": false, 00:08:17.828 "supported_io_types": { 00:08:17.828 "read": true, 00:08:17.828 "write": true, 00:08:17.828 "unmap": true, 00:08:17.828 "flush": true, 00:08:17.828 "reset": true, 00:08:17.828 "nvme_admin": false, 00:08:17.828 "nvme_io": false, 00:08:17.828 "nvme_io_md": false, 00:08:17.828 "write_zeroes": true, 00:08:17.828 "zcopy": false, 00:08:17.828 "get_zone_info": false, 00:08:17.828 "zone_management": false, 00:08:17.828 
"zone_append": false, 00:08:17.828 "compare": false, 00:08:17.828 "compare_and_write": false, 00:08:17.828 "abort": false, 00:08:17.828 "seek_hole": false, 00:08:17.828 "seek_data": false, 00:08:17.828 "copy": false, 00:08:17.828 "nvme_iov_md": false 00:08:17.828 }, 00:08:17.828 "memory_domains": [ 00:08:17.828 { 00:08:17.828 "dma_device_id": "system", 00:08:17.828 "dma_device_type": 1 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.828 "dma_device_type": 2 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "dma_device_id": "system", 00:08:17.828 "dma_device_type": 1 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.828 "dma_device_type": 2 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "dma_device_id": "system", 00:08:17.828 "dma_device_type": 1 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.828 "dma_device_type": 2 00:08:17.828 } 00:08:17.828 ], 00:08:17.828 "driver_specific": { 00:08:17.828 "raid": { 00:08:17.828 "uuid": "cb5cc250-9d57-4480-abdf-5e94b9cf8b7d", 00:08:17.828 "strip_size_kb": 64, 00:08:17.828 "state": "online", 00:08:17.828 "raid_level": "raid0", 00:08:17.828 "superblock": false, 00:08:17.828 "num_base_bdevs": 3, 00:08:17.828 "num_base_bdevs_discovered": 3, 00:08:17.828 "num_base_bdevs_operational": 3, 00:08:17.828 "base_bdevs_list": [ 00:08:17.828 { 00:08:17.828 "name": "BaseBdev1", 00:08:17.828 "uuid": "d41e362d-55d6-4eaa-a93a-c0982fccce1b", 00:08:17.828 "is_configured": true, 00:08:17.828 "data_offset": 0, 00:08:17.828 "data_size": 65536 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "name": "BaseBdev2", 00:08:17.828 "uuid": "1c36e00d-f188-4f62-ad09-017242bf1595", 00:08:17.828 "is_configured": true, 00:08:17.828 "data_offset": 0, 00:08:17.828 "data_size": 65536 00:08:17.828 }, 00:08:17.828 { 00:08:17.828 "name": "BaseBdev3", 00:08:17.828 "uuid": "4a7314fe-c04b-4d9e-ab40-30720f17a86c", 00:08:17.828 "is_configured": true, 
00:08:17.828 "data_offset": 0, 00:08:17.828 "data_size": 65536 00:08:17.828 } 00:08:17.828 ] 00:08:17.828 } 00:08:17.828 } 00:08:17.828 }' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.828 BaseBdev2 00:08:17.828 BaseBdev3' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.828 04:24:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.088 [2024-12-13 04:24:17.866421] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.088 [2024-12-13 04:24:17.866506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.088 [2024-12-13 04:24:17.866601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.088 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.089 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.089 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.089 04:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.089 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.089 "name": "Existed_Raid", 00:08:18.089 "uuid": "cb5cc250-9d57-4480-abdf-5e94b9cf8b7d", 00:08:18.089 "strip_size_kb": 64, 00:08:18.089 "state": "offline", 00:08:18.089 "raid_level": "raid0", 00:08:18.089 "superblock": false, 00:08:18.089 "num_base_bdevs": 3, 00:08:18.089 "num_base_bdevs_discovered": 2, 00:08:18.089 "num_base_bdevs_operational": 2, 00:08:18.089 "base_bdevs_list": [ 00:08:18.089 { 00:08:18.089 "name": null, 00:08:18.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.089 "is_configured": false, 00:08:18.089 "data_offset": 0, 00:08:18.089 "data_size": 65536 00:08:18.089 }, 00:08:18.089 { 00:08:18.089 "name": "BaseBdev2", 00:08:18.089 "uuid": "1c36e00d-f188-4f62-ad09-017242bf1595", 00:08:18.089 "is_configured": true, 00:08:18.089 "data_offset": 0, 00:08:18.089 "data_size": 65536 00:08:18.089 }, 00:08:18.089 { 00:08:18.089 "name": "BaseBdev3", 00:08:18.089 "uuid": "4a7314fe-c04b-4d9e-ab40-30720f17a86c", 00:08:18.089 "is_configured": true, 00:08:18.089 "data_offset": 0, 00:08:18.089 "data_size": 65536 00:08:18.089 } 00:08:18.089 ] 00:08:18.089 }' 00:08:18.089 04:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.089 04:24:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.348 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.348 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.348 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.348 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 [2024-12-13 04:24:18.262669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.349 04:24:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.349 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 [2024-12-13 04:24:18.343167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:18.349 [2024-12-13 04:24:18.343220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.609 BaseBdev2 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.609 [ 00:08:18.609 { 00:08:18.609 "name": "BaseBdev2", 00:08:18.609 "aliases": [ 00:08:18.609 "8c8da0a1-bb0e-4765-8f87-4d903278666e" 00:08:18.609 ], 00:08:18.609 "product_name": "Malloc disk", 00:08:18.609 "block_size": 512, 00:08:18.609 "num_blocks": 65536, 00:08:18.609 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:18.609 "assigned_rate_limits": { 00:08:18.609 "rw_ios_per_sec": 0, 00:08:18.609 "rw_mbytes_per_sec": 0, 00:08:18.609 "r_mbytes_per_sec": 0, 00:08:18.609 "w_mbytes_per_sec": 0 00:08:18.609 }, 00:08:18.609 "claimed": false, 00:08:18.609 "zoned": false, 00:08:18.609 "supported_io_types": { 00:08:18.609 "read": true, 00:08:18.609 "write": true, 00:08:18.609 "unmap": true, 00:08:18.609 "flush": true, 00:08:18.609 "reset": true, 00:08:18.609 "nvme_admin": false, 00:08:18.609 "nvme_io": false, 00:08:18.609 "nvme_io_md": false, 00:08:18.609 "write_zeroes": true, 00:08:18.609 "zcopy": true, 00:08:18.609 "get_zone_info": false, 00:08:18.609 "zone_management": false, 00:08:18.609 "zone_append": false, 00:08:18.609 "compare": false, 00:08:18.609 "compare_and_write": false, 00:08:18.609 "abort": true, 00:08:18.609 "seek_hole": false, 00:08:18.609 "seek_data": false, 00:08:18.609 "copy": true, 00:08:18.609 "nvme_iov_md": false 00:08:18.609 }, 00:08:18.609 "memory_domains": [ 00:08:18.609 { 00:08:18.609 "dma_device_id": "system", 00:08:18.609 "dma_device_type": 1 00:08:18.609 }, 
00:08:18.609 { 00:08:18.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.609 "dma_device_type": 2 00:08:18.609 } 00:08:18.609 ], 00:08:18.609 "driver_specific": {} 00:08:18.609 } 00:08:18.609 ] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.609 BaseBdev3 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.609 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.609 [ 00:08:18.609 { 00:08:18.609 "name": "BaseBdev3", 00:08:18.609 "aliases": [ 00:08:18.610 "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28" 00:08:18.610 ], 00:08:18.610 "product_name": "Malloc disk", 00:08:18.610 "block_size": 512, 00:08:18.610 "num_blocks": 65536, 00:08:18.610 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:18.610 "assigned_rate_limits": { 00:08:18.610 "rw_ios_per_sec": 0, 00:08:18.610 "rw_mbytes_per_sec": 0, 00:08:18.610 "r_mbytes_per_sec": 0, 00:08:18.610 "w_mbytes_per_sec": 0 00:08:18.610 }, 00:08:18.610 "claimed": false, 00:08:18.610 "zoned": false, 00:08:18.610 "supported_io_types": { 00:08:18.610 "read": true, 00:08:18.610 "write": true, 00:08:18.610 "unmap": true, 00:08:18.610 "flush": true, 00:08:18.610 "reset": true, 00:08:18.610 "nvme_admin": false, 00:08:18.610 "nvme_io": false, 00:08:18.610 "nvme_io_md": false, 00:08:18.610 "write_zeroes": true, 00:08:18.610 "zcopy": true, 00:08:18.610 "get_zone_info": false, 00:08:18.610 "zone_management": false, 00:08:18.610 "zone_append": false, 00:08:18.610 "compare": false, 00:08:18.610 "compare_and_write": false, 00:08:18.610 "abort": true, 00:08:18.610 "seek_hole": false, 00:08:18.610 "seek_data": false, 00:08:18.610 "copy": true, 00:08:18.610 "nvme_iov_md": false 00:08:18.610 }, 00:08:18.610 "memory_domains": [ 00:08:18.610 { 00:08:18.610 "dma_device_id": "system", 00:08:18.610 "dma_device_type": 1 00:08:18.610 }, 00:08:18.610 { 
00:08:18.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.610 "dma_device_type": 2 00:08:18.610 } 00:08:18.610 ], 00:08:18.610 "driver_specific": {} 00:08:18.610 } 00:08:18.610 ] 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.610 [2024-12-13 04:24:18.537997] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.610 [2024-12-13 04:24:18.538089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.610 [2024-12-13 04:24:18.538132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.610 [2024-12-13 04:24:18.540280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.610 "name": "Existed_Raid", 00:08:18.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.610 "strip_size_kb": 64, 00:08:18.610 "state": "configuring", 00:08:18.610 "raid_level": "raid0", 00:08:18.610 "superblock": false, 00:08:18.610 "num_base_bdevs": 3, 00:08:18.610 "num_base_bdevs_discovered": 2, 00:08:18.610 "num_base_bdevs_operational": 3, 00:08:18.610 "base_bdevs_list": [ 00:08:18.610 { 00:08:18.610 "name": "BaseBdev1", 00:08:18.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.610 
"is_configured": false, 00:08:18.610 "data_offset": 0, 00:08:18.610 "data_size": 0 00:08:18.610 }, 00:08:18.610 { 00:08:18.610 "name": "BaseBdev2", 00:08:18.610 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:18.610 "is_configured": true, 00:08:18.610 "data_offset": 0, 00:08:18.610 "data_size": 65536 00:08:18.610 }, 00:08:18.610 { 00:08:18.610 "name": "BaseBdev3", 00:08:18.610 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:18.610 "is_configured": true, 00:08:18.610 "data_offset": 0, 00:08:18.610 "data_size": 65536 00:08:18.610 } 00:08:18.610 ] 00:08:18.610 }' 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.610 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.179 [2024-12-13 04:24:18.989205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.179 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.179 04:24:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.180 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.180 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.180 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.180 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.180 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.180 04:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.180 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.180 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.180 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.180 "name": "Existed_Raid", 00:08:19.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.180 "strip_size_kb": 64, 00:08:19.180 "state": "configuring", 00:08:19.180 "raid_level": "raid0", 00:08:19.180 "superblock": false, 00:08:19.180 "num_base_bdevs": 3, 00:08:19.180 "num_base_bdevs_discovered": 1, 00:08:19.180 "num_base_bdevs_operational": 3, 00:08:19.180 "base_bdevs_list": [ 00:08:19.180 { 00:08:19.180 "name": "BaseBdev1", 00:08:19.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.180 "is_configured": false, 00:08:19.180 "data_offset": 0, 00:08:19.180 "data_size": 0 00:08:19.180 }, 00:08:19.180 { 00:08:19.180 "name": null, 00:08:19.180 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:19.180 "is_configured": false, 00:08:19.180 "data_offset": 0, 
00:08:19.180 "data_size": 65536 00:08:19.180 }, 00:08:19.180 { 00:08:19.180 "name": "BaseBdev3", 00:08:19.180 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:19.180 "is_configured": true, 00:08:19.180 "data_offset": 0, 00:08:19.180 "data_size": 65536 00:08:19.180 } 00:08:19.180 ] 00:08:19.180 }' 00:08:19.180 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.180 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.750 [2024-12-13 04:24:19.533051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.750 BaseBdev1 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.750 [ 00:08:19.750 { 00:08:19.750 "name": "BaseBdev1", 00:08:19.750 "aliases": [ 00:08:19.750 "0f64553a-09b7-430c-af94-7c5847a343ee" 00:08:19.750 ], 00:08:19.750 "product_name": "Malloc disk", 00:08:19.750 "block_size": 512, 00:08:19.750 "num_blocks": 65536, 00:08:19.750 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:19.750 "assigned_rate_limits": { 00:08:19.750 "rw_ios_per_sec": 0, 00:08:19.750 "rw_mbytes_per_sec": 0, 00:08:19.750 "r_mbytes_per_sec": 0, 00:08:19.750 "w_mbytes_per_sec": 0 00:08:19.750 }, 00:08:19.750 "claimed": true, 00:08:19.750 "claim_type": "exclusive_write", 00:08:19.750 "zoned": false, 00:08:19.750 "supported_io_types": { 00:08:19.750 "read": true, 00:08:19.750 "write": true, 00:08:19.750 "unmap": 
true, 00:08:19.750 "flush": true, 00:08:19.750 "reset": true, 00:08:19.750 "nvme_admin": false, 00:08:19.750 "nvme_io": false, 00:08:19.750 "nvme_io_md": false, 00:08:19.750 "write_zeroes": true, 00:08:19.750 "zcopy": true, 00:08:19.750 "get_zone_info": false, 00:08:19.750 "zone_management": false, 00:08:19.750 "zone_append": false, 00:08:19.750 "compare": false, 00:08:19.750 "compare_and_write": false, 00:08:19.750 "abort": true, 00:08:19.750 "seek_hole": false, 00:08:19.750 "seek_data": false, 00:08:19.750 "copy": true, 00:08:19.750 "nvme_iov_md": false 00:08:19.750 }, 00:08:19.750 "memory_domains": [ 00:08:19.750 { 00:08:19.750 "dma_device_id": "system", 00:08:19.750 "dma_device_type": 1 00:08:19.750 }, 00:08:19.750 { 00:08:19.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.750 "dma_device_type": 2 00:08:19.750 } 00:08:19.750 ], 00:08:19.750 "driver_specific": {} 00:08:19.750 } 00:08:19.750 ] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.750 04:24:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.750 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.750 "name": "Existed_Raid", 00:08:19.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.751 "strip_size_kb": 64, 00:08:19.751 "state": "configuring", 00:08:19.751 "raid_level": "raid0", 00:08:19.751 "superblock": false, 00:08:19.751 "num_base_bdevs": 3, 00:08:19.751 "num_base_bdevs_discovered": 2, 00:08:19.751 "num_base_bdevs_operational": 3, 00:08:19.751 "base_bdevs_list": [ 00:08:19.751 { 00:08:19.751 "name": "BaseBdev1", 00:08:19.751 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:19.751 "is_configured": true, 00:08:19.751 "data_offset": 0, 00:08:19.751 "data_size": 65536 00:08:19.751 }, 00:08:19.751 { 00:08:19.751 "name": null, 00:08:19.751 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:19.751 "is_configured": false, 00:08:19.751 "data_offset": 0, 00:08:19.751 "data_size": 65536 00:08:19.751 }, 00:08:19.751 { 00:08:19.751 "name": "BaseBdev3", 00:08:19.751 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:19.751 "is_configured": true, 00:08:19.751 "data_offset": 0, 
00:08:19.751 "data_size": 65536 00:08:19.751 } 00:08:19.751 ] 00:08:19.751 }' 00:08:19.751 04:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.751 04:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.011 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.011 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.011 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.011 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 [2024-12-13 04:24:20.068208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.270 "name": "Existed_Raid", 00:08:20.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.270 "strip_size_kb": 64, 00:08:20.270 "state": "configuring", 00:08:20.270 "raid_level": "raid0", 00:08:20.270 "superblock": false, 00:08:20.270 "num_base_bdevs": 3, 00:08:20.270 "num_base_bdevs_discovered": 1, 00:08:20.270 "num_base_bdevs_operational": 3, 00:08:20.270 "base_bdevs_list": [ 00:08:20.270 { 00:08:20.270 "name": "BaseBdev1", 00:08:20.270 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:20.270 "is_configured": true, 00:08:20.270 "data_offset": 0, 00:08:20.270 "data_size": 65536 00:08:20.270 }, 00:08:20.270 { 
00:08:20.270 "name": null, 00:08:20.270 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:20.270 "is_configured": false, 00:08:20.270 "data_offset": 0, 00:08:20.270 "data_size": 65536 00:08:20.270 }, 00:08:20.270 { 00:08:20.270 "name": null, 00:08:20.270 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:20.270 "is_configured": false, 00:08:20.270 "data_offset": 0, 00:08:20.270 "data_size": 65536 00:08:20.270 } 00:08:20.270 ] 00:08:20.270 }' 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.270 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.530 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.530 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.530 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.530 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.790 [2024-12-13 04:24:20.583351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.790 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.790 "name": "Existed_Raid", 00:08:20.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.790 "strip_size_kb": 64, 00:08:20.790 "state": "configuring", 00:08:20.790 "raid_level": "raid0", 00:08:20.790 
"superblock": false, 00:08:20.790 "num_base_bdevs": 3, 00:08:20.790 "num_base_bdevs_discovered": 2, 00:08:20.790 "num_base_bdevs_operational": 3, 00:08:20.790 "base_bdevs_list": [ 00:08:20.790 { 00:08:20.790 "name": "BaseBdev1", 00:08:20.790 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:20.790 "is_configured": true, 00:08:20.790 "data_offset": 0, 00:08:20.790 "data_size": 65536 00:08:20.790 }, 00:08:20.790 { 00:08:20.790 "name": null, 00:08:20.790 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:20.790 "is_configured": false, 00:08:20.790 "data_offset": 0, 00:08:20.790 "data_size": 65536 00:08:20.790 }, 00:08:20.790 { 00:08:20.790 "name": "BaseBdev3", 00:08:20.790 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:20.790 "is_configured": true, 00:08:20.791 "data_offset": 0, 00:08:20.791 "data_size": 65536 00:08:20.791 } 00:08:20.791 ] 00:08:20.791 }' 00:08:20.791 04:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.791 04:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:21.050 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.051 [2024-12-13 04:24:21.030636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.051 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.326 04:24:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.326 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.326 "name": "Existed_Raid", 00:08:21.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.326 "strip_size_kb": 64, 00:08:21.326 "state": "configuring", 00:08:21.326 "raid_level": "raid0", 00:08:21.326 "superblock": false, 00:08:21.326 "num_base_bdevs": 3, 00:08:21.326 "num_base_bdevs_discovered": 1, 00:08:21.326 "num_base_bdevs_operational": 3, 00:08:21.326 "base_bdevs_list": [ 00:08:21.326 { 00:08:21.326 "name": null, 00:08:21.326 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:21.326 "is_configured": false, 00:08:21.326 "data_offset": 0, 00:08:21.326 "data_size": 65536 00:08:21.326 }, 00:08:21.326 { 00:08:21.326 "name": null, 00:08:21.326 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:21.326 "is_configured": false, 00:08:21.326 "data_offset": 0, 00:08:21.326 "data_size": 65536 00:08:21.326 }, 00:08:21.326 { 00:08:21.326 "name": "BaseBdev3", 00:08:21.326 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:21.326 "is_configured": true, 00:08:21.326 "data_offset": 0, 00:08:21.326 "data_size": 65536 00:08:21.326 } 00:08:21.326 ] 00:08:21.326 }' 00:08:21.326 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.326 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.586 [2024-12-13 04:24:21.541703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.586 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.587 "name": "Existed_Raid", 00:08:21.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.587 "strip_size_kb": 64, 00:08:21.587 "state": "configuring", 00:08:21.587 "raid_level": "raid0", 00:08:21.587 "superblock": false, 00:08:21.587 "num_base_bdevs": 3, 00:08:21.587 "num_base_bdevs_discovered": 2, 00:08:21.587 "num_base_bdevs_operational": 3, 00:08:21.587 "base_bdevs_list": [ 00:08:21.587 { 00:08:21.587 "name": null, 00:08:21.587 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:21.587 "is_configured": false, 00:08:21.587 "data_offset": 0, 00:08:21.587 "data_size": 65536 00:08:21.587 }, 00:08:21.587 { 00:08:21.587 "name": "BaseBdev2", 00:08:21.587 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:21.587 "is_configured": true, 00:08:21.587 "data_offset": 0, 00:08:21.587 "data_size": 65536 00:08:21.587 }, 00:08:21.587 { 00:08:21.587 "name": "BaseBdev3", 00:08:21.587 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:21.587 "is_configured": true, 00:08:21.587 "data_offset": 0, 00:08:21.587 "data_size": 65536 00:08:21.587 } 00:08:21.587 ] 00:08:21.587 }' 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.587 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.158 04:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.158 04:24:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:22.158 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.158 04:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0f64553a-09b7-430c-af94-7c5847a343ee 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.158 [2024-12-13 04:24:22.101466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:22.158 [2024-12-13 04:24:22.101608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:22.158 [2024-12-13 04:24:22.101625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:22.158 [2024-12-13 04:24:22.101915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:08:22.158 [2024-12-13 04:24:22.102063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:22.158 [2024-12-13 04:24:22.102073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:22.158 [2024-12-13 04:24:22.102282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.158 NewBaseBdev 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:22.158 [ 00:08:22.158 { 00:08:22.158 "name": "NewBaseBdev", 00:08:22.158 "aliases": [ 00:08:22.158 "0f64553a-09b7-430c-af94-7c5847a343ee" 00:08:22.158 ], 00:08:22.158 "product_name": "Malloc disk", 00:08:22.158 "block_size": 512, 00:08:22.158 "num_blocks": 65536, 00:08:22.158 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:22.158 "assigned_rate_limits": { 00:08:22.158 "rw_ios_per_sec": 0, 00:08:22.158 "rw_mbytes_per_sec": 0, 00:08:22.158 "r_mbytes_per_sec": 0, 00:08:22.158 "w_mbytes_per_sec": 0 00:08:22.158 }, 00:08:22.158 "claimed": true, 00:08:22.158 "claim_type": "exclusive_write", 00:08:22.158 "zoned": false, 00:08:22.158 "supported_io_types": { 00:08:22.158 "read": true, 00:08:22.158 "write": true, 00:08:22.158 "unmap": true, 00:08:22.158 "flush": true, 00:08:22.158 "reset": true, 00:08:22.158 "nvme_admin": false, 00:08:22.158 "nvme_io": false, 00:08:22.158 "nvme_io_md": false, 00:08:22.158 "write_zeroes": true, 00:08:22.158 "zcopy": true, 00:08:22.158 "get_zone_info": false, 00:08:22.158 "zone_management": false, 00:08:22.158 "zone_append": false, 00:08:22.158 "compare": false, 00:08:22.158 "compare_and_write": false, 00:08:22.158 "abort": true, 00:08:22.158 "seek_hole": false, 00:08:22.158 "seek_data": false, 00:08:22.158 "copy": true, 00:08:22.158 "nvme_iov_md": false 00:08:22.158 }, 00:08:22.158 "memory_domains": [ 00:08:22.158 { 00:08:22.158 "dma_device_id": "system", 00:08:22.158 "dma_device_type": 1 00:08:22.158 }, 00:08:22.158 { 00:08:22.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.158 "dma_device_type": 2 00:08:22.158 } 00:08:22.158 ], 00:08:22.158 "driver_specific": {} 00:08:22.158 } 00:08:22.158 ] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.158 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.419 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.419 "name": "Existed_Raid", 00:08:22.419 "uuid": "bf09882d-9595-4eb9-b356-559b776a088d", 00:08:22.419 "strip_size_kb": 64, 00:08:22.419 "state": "online", 00:08:22.419 "raid_level": "raid0", 00:08:22.419 "superblock": false, 00:08:22.419 "num_base_bdevs": 3, 00:08:22.419 
"num_base_bdevs_discovered": 3, 00:08:22.419 "num_base_bdevs_operational": 3, 00:08:22.419 "base_bdevs_list": [ 00:08:22.419 { 00:08:22.419 "name": "NewBaseBdev", 00:08:22.419 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:22.419 "is_configured": true, 00:08:22.419 "data_offset": 0, 00:08:22.419 "data_size": 65536 00:08:22.419 }, 00:08:22.419 { 00:08:22.419 "name": "BaseBdev2", 00:08:22.419 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:22.419 "is_configured": true, 00:08:22.419 "data_offset": 0, 00:08:22.419 "data_size": 65536 00:08:22.419 }, 00:08:22.419 { 00:08:22.419 "name": "BaseBdev3", 00:08:22.419 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:22.419 "is_configured": true, 00:08:22.419 "data_offset": 0, 00:08:22.419 "data_size": 65536 00:08:22.419 } 00:08:22.419 ] 00:08:22.419 }' 00:08:22.419 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.419 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.680 [2024-12-13 04:24:22.548973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.680 "name": "Existed_Raid", 00:08:22.680 "aliases": [ 00:08:22.680 "bf09882d-9595-4eb9-b356-559b776a088d" 00:08:22.680 ], 00:08:22.680 "product_name": "Raid Volume", 00:08:22.680 "block_size": 512, 00:08:22.680 "num_blocks": 196608, 00:08:22.680 "uuid": "bf09882d-9595-4eb9-b356-559b776a088d", 00:08:22.680 "assigned_rate_limits": { 00:08:22.680 "rw_ios_per_sec": 0, 00:08:22.680 "rw_mbytes_per_sec": 0, 00:08:22.680 "r_mbytes_per_sec": 0, 00:08:22.680 "w_mbytes_per_sec": 0 00:08:22.680 }, 00:08:22.680 "claimed": false, 00:08:22.680 "zoned": false, 00:08:22.680 "supported_io_types": { 00:08:22.680 "read": true, 00:08:22.680 "write": true, 00:08:22.680 "unmap": true, 00:08:22.680 "flush": true, 00:08:22.680 "reset": true, 00:08:22.680 "nvme_admin": false, 00:08:22.680 "nvme_io": false, 00:08:22.680 "nvme_io_md": false, 00:08:22.680 "write_zeroes": true, 00:08:22.680 "zcopy": false, 00:08:22.680 "get_zone_info": false, 00:08:22.680 "zone_management": false, 00:08:22.680 "zone_append": false, 00:08:22.680 "compare": false, 00:08:22.680 "compare_and_write": false, 00:08:22.680 "abort": false, 00:08:22.680 "seek_hole": false, 00:08:22.680 "seek_data": false, 00:08:22.680 "copy": false, 00:08:22.680 "nvme_iov_md": false 00:08:22.680 }, 00:08:22.680 "memory_domains": [ 00:08:22.680 { 00:08:22.680 "dma_device_id": "system", 00:08:22.680 "dma_device_type": 1 00:08:22.680 }, 00:08:22.680 { 00:08:22.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.680 "dma_device_type": 2 00:08:22.680 }, 
00:08:22.680 { 00:08:22.680 "dma_device_id": "system", 00:08:22.680 "dma_device_type": 1 00:08:22.680 }, 00:08:22.680 { 00:08:22.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.680 "dma_device_type": 2 00:08:22.680 }, 00:08:22.680 { 00:08:22.680 "dma_device_id": "system", 00:08:22.680 "dma_device_type": 1 00:08:22.680 }, 00:08:22.680 { 00:08:22.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.680 "dma_device_type": 2 00:08:22.680 } 00:08:22.680 ], 00:08:22.680 "driver_specific": { 00:08:22.680 "raid": { 00:08:22.680 "uuid": "bf09882d-9595-4eb9-b356-559b776a088d", 00:08:22.680 "strip_size_kb": 64, 00:08:22.680 "state": "online", 00:08:22.680 "raid_level": "raid0", 00:08:22.680 "superblock": false, 00:08:22.680 "num_base_bdevs": 3, 00:08:22.680 "num_base_bdevs_discovered": 3, 00:08:22.680 "num_base_bdevs_operational": 3, 00:08:22.680 "base_bdevs_list": [ 00:08:22.680 { 00:08:22.680 "name": "NewBaseBdev", 00:08:22.680 "uuid": "0f64553a-09b7-430c-af94-7c5847a343ee", 00:08:22.680 "is_configured": true, 00:08:22.680 "data_offset": 0, 00:08:22.680 "data_size": 65536 00:08:22.680 }, 00:08:22.680 { 00:08:22.680 "name": "BaseBdev2", 00:08:22.680 "uuid": "8c8da0a1-bb0e-4765-8f87-4d903278666e", 00:08:22.680 "is_configured": true, 00:08:22.680 "data_offset": 0, 00:08:22.680 "data_size": 65536 00:08:22.680 }, 00:08:22.680 { 00:08:22.680 "name": "BaseBdev3", 00:08:22.680 "uuid": "64cc2228-bcd2-49e8-a35e-8d3dbe8c3b28", 00:08:22.680 "is_configured": true, 00:08:22.680 "data_offset": 0, 00:08:22.680 "data_size": 65536 00:08:22.680 } 00:08:22.680 ] 00:08:22.680 } 00:08:22.680 } 00:08:22.680 }' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:22.680 BaseBdev2 00:08:22.680 BaseBdev3' 00:08:22.680 04:24:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.680 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.940 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.941 [2024-12-13 04:24:22.792473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.941 [2024-12-13 04:24:22.792497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.941 [2024-12-13 04:24:22.792579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.941 [2024-12-13 04:24:22.792634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.941 [2024-12-13 04:24:22.792647] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76698 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76698 ']' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76698 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76698 00:08:22.941 killing process with pid 76698 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76698' 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76698 00:08:22.941 [2024-12-13 04:24:22.829932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.941 04:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76698 00:08:22.941 [2024-12-13 04:24:22.888358] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.201 04:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:23.201 00:08:23.201 real 0m8.886s 00:08:23.201 user 0m14.915s 00:08:23.201 sys 0m1.937s 00:08:23.201 04:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:23.201 04:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.201 ************************************ 00:08:23.201 END TEST raid_state_function_test 00:08:23.201 ************************************ 00:08:23.462 04:24:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:23.462 04:24:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:23.462 04:24:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.462 04:24:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.462 ************************************ 00:08:23.462 START TEST raid_state_function_test_sb 00:08:23.462 ************************************ 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77308 00:08:23.462 04:24:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77308' 00:08:23.462 Process raid pid: 77308 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77308 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77308 ']' 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.462 04:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.462 [2024-12-13 04:24:23.378092] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:23.462 [2024-12-13 04:24:23.378304] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.723 [2024-12-13 04:24:23.511610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.723 [2024-12-13 04:24:23.550280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.723 [2024-12-13 04:24:23.627399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.723 [2024-12-13 04:24:23.627574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.294 [2024-12-13 04:24:24.206545] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.294 [2024-12-13 04:24:24.206657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.294 [2024-12-13 04:24:24.206680] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.294 [2024-12-13 04:24:24.206693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.294 [2024-12-13 04:24:24.206699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:24.294 [2024-12-13 04:24:24.206711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.294 "name": "Existed_Raid", 00:08:24.294 "uuid": "b271181c-264e-4a1d-a82f-f020f266464b", 00:08:24.294 "strip_size_kb": 64, 00:08:24.294 "state": "configuring", 00:08:24.294 "raid_level": "raid0", 00:08:24.294 "superblock": true, 00:08:24.294 "num_base_bdevs": 3, 00:08:24.294 "num_base_bdevs_discovered": 0, 00:08:24.294 "num_base_bdevs_operational": 3, 00:08:24.294 "base_bdevs_list": [ 00:08:24.294 { 00:08:24.294 "name": "BaseBdev1", 00:08:24.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.294 "is_configured": false, 00:08:24.294 "data_offset": 0, 00:08:24.294 "data_size": 0 00:08:24.294 }, 00:08:24.294 { 00:08:24.294 "name": "BaseBdev2", 00:08:24.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.294 "is_configured": false, 00:08:24.294 "data_offset": 0, 00:08:24.294 "data_size": 0 00:08:24.294 }, 00:08:24.294 { 00:08:24.294 "name": "BaseBdev3", 00:08:24.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.294 "is_configured": false, 00:08:24.294 "data_offset": 0, 00:08:24.294 "data_size": 0 00:08:24.294 } 00:08:24.294 ] 00:08:24.294 }' 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.294 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.864 [2024-12-13 04:24:24.649714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.864 [2024-12-13 04:24:24.649834] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.864 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.864 [2024-12-13 04:24:24.657724] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.864 [2024-12-13 04:24:24.657821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.865 [2024-12-13 04:24:24.657848] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.865 [2024-12-13 04:24:24.657871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.865 [2024-12-13 04:24:24.657889] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.865 [2024-12-13 04:24:24.657910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.865 [2024-12-13 04:24:24.680677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.865 BaseBdev1 
00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.865 [ 00:08:24.865 { 00:08:24.865 "name": "BaseBdev1", 00:08:24.865 "aliases": [ 00:08:24.865 "f7b97428-894e-4f68-a6f2-6b5ea06d7e51" 00:08:24.865 ], 00:08:24.865 "product_name": "Malloc disk", 00:08:24.865 "block_size": 512, 00:08:24.865 "num_blocks": 65536, 00:08:24.865 "uuid": "f7b97428-894e-4f68-a6f2-6b5ea06d7e51", 00:08:24.865 "assigned_rate_limits": { 00:08:24.865 
"rw_ios_per_sec": 0, 00:08:24.865 "rw_mbytes_per_sec": 0, 00:08:24.865 "r_mbytes_per_sec": 0, 00:08:24.865 "w_mbytes_per_sec": 0 00:08:24.865 }, 00:08:24.865 "claimed": true, 00:08:24.865 "claim_type": "exclusive_write", 00:08:24.865 "zoned": false, 00:08:24.865 "supported_io_types": { 00:08:24.865 "read": true, 00:08:24.865 "write": true, 00:08:24.865 "unmap": true, 00:08:24.865 "flush": true, 00:08:24.865 "reset": true, 00:08:24.865 "nvme_admin": false, 00:08:24.865 "nvme_io": false, 00:08:24.865 "nvme_io_md": false, 00:08:24.865 "write_zeroes": true, 00:08:24.865 "zcopy": true, 00:08:24.865 "get_zone_info": false, 00:08:24.865 "zone_management": false, 00:08:24.865 "zone_append": false, 00:08:24.865 "compare": false, 00:08:24.865 "compare_and_write": false, 00:08:24.865 "abort": true, 00:08:24.865 "seek_hole": false, 00:08:24.865 "seek_data": false, 00:08:24.865 "copy": true, 00:08:24.865 "nvme_iov_md": false 00:08:24.865 }, 00:08:24.865 "memory_domains": [ 00:08:24.865 { 00:08:24.865 "dma_device_id": "system", 00:08:24.865 "dma_device_type": 1 00:08:24.865 }, 00:08:24.865 { 00:08:24.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.865 "dma_device_type": 2 00:08:24.865 } 00:08:24.865 ], 00:08:24.865 "driver_specific": {} 00:08:24.865 } 00:08:24.865 ] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.865 "name": "Existed_Raid", 00:08:24.865 "uuid": "66330d38-6c2c-4fdd-b46b-1cfbffc77606", 00:08:24.865 "strip_size_kb": 64, 00:08:24.865 "state": "configuring", 00:08:24.865 "raid_level": "raid0", 00:08:24.865 "superblock": true, 00:08:24.865 "num_base_bdevs": 3, 00:08:24.865 "num_base_bdevs_discovered": 1, 00:08:24.865 "num_base_bdevs_operational": 3, 00:08:24.865 "base_bdevs_list": [ 00:08:24.865 { 00:08:24.865 "name": "BaseBdev1", 00:08:24.865 "uuid": "f7b97428-894e-4f68-a6f2-6b5ea06d7e51", 00:08:24.865 "is_configured": true, 00:08:24.865 "data_offset": 2048, 00:08:24.865 "data_size": 63488 
00:08:24.865 }, 00:08:24.865 { 00:08:24.865 "name": "BaseBdev2", 00:08:24.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.865 "is_configured": false, 00:08:24.865 "data_offset": 0, 00:08:24.865 "data_size": 0 00:08:24.865 }, 00:08:24.865 { 00:08:24.865 "name": "BaseBdev3", 00:08:24.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.865 "is_configured": false, 00:08:24.865 "data_offset": 0, 00:08:24.865 "data_size": 0 00:08:24.865 } 00:08:24.865 ] 00:08:24.865 }' 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.865 04:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.435 [2024-12-13 04:24:25.168039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.435 [2024-12-13 04:24:25.168097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.435 [2024-12-13 04:24:25.180045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.435 [2024-12-13 
04:24:25.182236] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.435 [2024-12-13 04:24:25.182278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.435 [2024-12-13 04:24:25.182287] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.435 [2024-12-13 04:24:25.182296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.435 "name": "Existed_Raid", 00:08:25.435 "uuid": "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07", 00:08:25.435 "strip_size_kb": 64, 00:08:25.435 "state": "configuring", 00:08:25.435 "raid_level": "raid0", 00:08:25.435 "superblock": true, 00:08:25.435 "num_base_bdevs": 3, 00:08:25.435 "num_base_bdevs_discovered": 1, 00:08:25.435 "num_base_bdevs_operational": 3, 00:08:25.435 "base_bdevs_list": [ 00:08:25.435 { 00:08:25.435 "name": "BaseBdev1", 00:08:25.435 "uuid": "f7b97428-894e-4f68-a6f2-6b5ea06d7e51", 00:08:25.435 "is_configured": true, 00:08:25.435 "data_offset": 2048, 00:08:25.435 "data_size": 63488 00:08:25.435 }, 00:08:25.435 { 00:08:25.435 "name": "BaseBdev2", 00:08:25.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.435 "is_configured": false, 00:08:25.435 "data_offset": 0, 00:08:25.435 "data_size": 0 00:08:25.435 }, 00:08:25.435 { 00:08:25.435 "name": "BaseBdev3", 00:08:25.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.435 "is_configured": false, 00:08:25.435 "data_offset": 0, 00:08:25.435 "data_size": 0 00:08:25.435 } 00:08:25.435 ] 00:08:25.435 }' 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.435 04:24:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.695 [2024-12-13 04:24:25.635903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.695 BaseBdev2 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.695 [ 00:08:25.695 { 00:08:25.695 "name": "BaseBdev2", 00:08:25.695 "aliases": [ 00:08:25.695 "8535d4af-451a-4856-899d-75872e957241" 00:08:25.695 ], 00:08:25.695 "product_name": "Malloc disk", 00:08:25.695 "block_size": 512, 00:08:25.695 "num_blocks": 65536, 00:08:25.695 "uuid": "8535d4af-451a-4856-899d-75872e957241", 00:08:25.695 "assigned_rate_limits": { 00:08:25.695 "rw_ios_per_sec": 0, 00:08:25.695 "rw_mbytes_per_sec": 0, 00:08:25.695 "r_mbytes_per_sec": 0, 00:08:25.695 "w_mbytes_per_sec": 0 00:08:25.695 }, 00:08:25.695 "claimed": true, 00:08:25.695 "claim_type": "exclusive_write", 00:08:25.695 "zoned": false, 00:08:25.695 "supported_io_types": { 00:08:25.695 "read": true, 00:08:25.695 "write": true, 00:08:25.695 "unmap": true, 00:08:25.695 "flush": true, 00:08:25.695 "reset": true, 00:08:25.695 "nvme_admin": false, 00:08:25.695 "nvme_io": false, 00:08:25.695 "nvme_io_md": false, 00:08:25.695 "write_zeroes": true, 00:08:25.695 "zcopy": true, 00:08:25.695 "get_zone_info": false, 00:08:25.695 "zone_management": false, 00:08:25.695 "zone_append": false, 00:08:25.695 "compare": false, 00:08:25.695 "compare_and_write": false, 00:08:25.695 "abort": true, 00:08:25.695 "seek_hole": false, 00:08:25.695 "seek_data": false, 00:08:25.695 "copy": true, 00:08:25.695 "nvme_iov_md": false 00:08:25.695 }, 00:08:25.695 "memory_domains": [ 00:08:25.695 { 00:08:25.695 "dma_device_id": "system", 00:08:25.695 "dma_device_type": 1 00:08:25.695 }, 00:08:25.695 { 00:08:25.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.695 "dma_device_type": 2 00:08:25.695 } 00:08:25.695 ], 00:08:25.695 "driver_specific": {} 00:08:25.695 } 00:08:25.695 ] 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.695 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.696 04:24:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.954 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.954 "name": "Existed_Raid", 00:08:25.954 "uuid": "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07", 00:08:25.954 "strip_size_kb": 64, 00:08:25.954 "state": "configuring", 00:08:25.955 "raid_level": "raid0", 00:08:25.955 "superblock": true, 00:08:25.955 "num_base_bdevs": 3, 00:08:25.955 "num_base_bdevs_discovered": 2, 00:08:25.955 "num_base_bdevs_operational": 3, 00:08:25.955 "base_bdevs_list": [ 00:08:25.955 { 00:08:25.955 "name": "BaseBdev1", 00:08:25.955 "uuid": "f7b97428-894e-4f68-a6f2-6b5ea06d7e51", 00:08:25.955 "is_configured": true, 00:08:25.955 "data_offset": 2048, 00:08:25.955 "data_size": 63488 00:08:25.955 }, 00:08:25.955 { 00:08:25.955 "name": "BaseBdev2", 00:08:25.955 "uuid": "8535d4af-451a-4856-899d-75872e957241", 00:08:25.955 "is_configured": true, 00:08:25.955 "data_offset": 2048, 00:08:25.955 "data_size": 63488 00:08:25.955 }, 00:08:25.955 { 00:08:25.955 "name": "BaseBdev3", 00:08:25.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.955 "is_configured": false, 00:08:25.955 "data_offset": 0, 00:08:25.955 "data_size": 0 00:08:25.955 } 00:08:25.955 ] 00:08:25.955 }' 00:08:25.955 04:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.955 04:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.214 [2024-12-13 04:24:26.093323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.214 [2024-12-13 04:24:26.094158] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:26.214 [2024-12-13 04:24:26.094237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.214 BaseBdev3 00:08:26.214 [2024-12-13 04:24:26.095294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:26.214 [2024-12-13 04:24:26.095829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:26.214 [2024-12-13 04:24:26.095862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:26.214 [2024-12-13 04:24:26.096175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.214 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.214 [ 00:08:26.214 { 00:08:26.214 "name": "BaseBdev3", 00:08:26.214 "aliases": [ 00:08:26.214 "5c60b242-53b8-44ca-9061-d4aff37cd456" 00:08:26.214 ], 00:08:26.214 "product_name": "Malloc disk", 00:08:26.214 "block_size": 512, 00:08:26.214 "num_blocks": 65536, 00:08:26.214 "uuid": "5c60b242-53b8-44ca-9061-d4aff37cd456", 00:08:26.214 "assigned_rate_limits": { 00:08:26.214 "rw_ios_per_sec": 0, 00:08:26.214 "rw_mbytes_per_sec": 0, 00:08:26.214 "r_mbytes_per_sec": 0, 00:08:26.214 "w_mbytes_per_sec": 0 00:08:26.214 }, 00:08:26.214 "claimed": true, 00:08:26.214 "claim_type": "exclusive_write", 00:08:26.214 "zoned": false, 00:08:26.214 "supported_io_types": { 00:08:26.214 "read": true, 00:08:26.214 "write": true, 00:08:26.214 "unmap": true, 00:08:26.214 "flush": true, 00:08:26.214 "reset": true, 00:08:26.214 "nvme_admin": false, 00:08:26.214 "nvme_io": false, 00:08:26.214 "nvme_io_md": false, 00:08:26.214 "write_zeroes": true, 00:08:26.214 "zcopy": true, 00:08:26.214 "get_zone_info": false, 00:08:26.214 "zone_management": false, 00:08:26.214 "zone_append": false, 00:08:26.214 "compare": false, 00:08:26.214 "compare_and_write": false, 00:08:26.214 "abort": true, 00:08:26.214 "seek_hole": false, 00:08:26.214 "seek_data": false, 00:08:26.214 "copy": true, 00:08:26.214 "nvme_iov_md": false 00:08:26.214 }, 00:08:26.215 "memory_domains": [ 00:08:26.215 { 00:08:26.215 "dma_device_id": "system", 00:08:26.215 "dma_device_type": 1 00:08:26.215 }, 00:08:26.215 { 00:08:26.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.215 "dma_device_type": 2 00:08:26.215 } 00:08:26.215 ], 00:08:26.215 "driver_specific": 
{} 00:08:26.215 } 00:08:26.215 ] 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.215 "name": "Existed_Raid", 00:08:26.215 "uuid": "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07", 00:08:26.215 "strip_size_kb": 64, 00:08:26.215 "state": "online", 00:08:26.215 "raid_level": "raid0", 00:08:26.215 "superblock": true, 00:08:26.215 "num_base_bdevs": 3, 00:08:26.215 "num_base_bdevs_discovered": 3, 00:08:26.215 "num_base_bdevs_operational": 3, 00:08:26.215 "base_bdevs_list": [ 00:08:26.215 { 00:08:26.215 "name": "BaseBdev1", 00:08:26.215 "uuid": "f7b97428-894e-4f68-a6f2-6b5ea06d7e51", 00:08:26.215 "is_configured": true, 00:08:26.215 "data_offset": 2048, 00:08:26.215 "data_size": 63488 00:08:26.215 }, 00:08:26.215 { 00:08:26.215 "name": "BaseBdev2", 00:08:26.215 "uuid": "8535d4af-451a-4856-899d-75872e957241", 00:08:26.215 "is_configured": true, 00:08:26.215 "data_offset": 2048, 00:08:26.215 "data_size": 63488 00:08:26.215 }, 00:08:26.215 { 00:08:26.215 "name": "BaseBdev3", 00:08:26.215 "uuid": "5c60b242-53b8-44ca-9061-d4aff37cd456", 00:08:26.215 "is_configured": true, 00:08:26.215 "data_offset": 2048, 00:08:26.215 "data_size": 63488 00:08:26.215 } 00:08:26.215 ] 00:08:26.215 }' 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.215 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.785 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.786 [2024-12-13 04:24:26.520844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.786 "name": "Existed_Raid", 00:08:26.786 "aliases": [ 00:08:26.786 "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07" 00:08:26.786 ], 00:08:26.786 "product_name": "Raid Volume", 00:08:26.786 "block_size": 512, 00:08:26.786 "num_blocks": 190464, 00:08:26.786 "uuid": "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07", 00:08:26.786 "assigned_rate_limits": { 00:08:26.786 "rw_ios_per_sec": 0, 00:08:26.786 "rw_mbytes_per_sec": 0, 00:08:26.786 "r_mbytes_per_sec": 0, 00:08:26.786 "w_mbytes_per_sec": 0 00:08:26.786 }, 00:08:26.786 "claimed": false, 00:08:26.786 "zoned": false, 00:08:26.786 "supported_io_types": { 00:08:26.786 "read": true, 00:08:26.786 "write": true, 00:08:26.786 "unmap": true, 00:08:26.786 "flush": true, 00:08:26.786 "reset": true, 00:08:26.786 "nvme_admin": false, 00:08:26.786 "nvme_io": false, 00:08:26.786 "nvme_io_md": false, 00:08:26.786 
"write_zeroes": true, 00:08:26.786 "zcopy": false, 00:08:26.786 "get_zone_info": false, 00:08:26.786 "zone_management": false, 00:08:26.786 "zone_append": false, 00:08:26.786 "compare": false, 00:08:26.786 "compare_and_write": false, 00:08:26.786 "abort": false, 00:08:26.786 "seek_hole": false, 00:08:26.786 "seek_data": false, 00:08:26.786 "copy": false, 00:08:26.786 "nvme_iov_md": false 00:08:26.786 }, 00:08:26.786 "memory_domains": [ 00:08:26.786 { 00:08:26.786 "dma_device_id": "system", 00:08:26.786 "dma_device_type": 1 00:08:26.786 }, 00:08:26.786 { 00:08:26.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.786 "dma_device_type": 2 00:08:26.786 }, 00:08:26.786 { 00:08:26.786 "dma_device_id": "system", 00:08:26.786 "dma_device_type": 1 00:08:26.786 }, 00:08:26.786 { 00:08:26.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.786 "dma_device_type": 2 00:08:26.786 }, 00:08:26.786 { 00:08:26.786 "dma_device_id": "system", 00:08:26.786 "dma_device_type": 1 00:08:26.786 }, 00:08:26.786 { 00:08:26.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.786 "dma_device_type": 2 00:08:26.786 } 00:08:26.786 ], 00:08:26.786 "driver_specific": { 00:08:26.786 "raid": { 00:08:26.786 "uuid": "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07", 00:08:26.786 "strip_size_kb": 64, 00:08:26.786 "state": "online", 00:08:26.786 "raid_level": "raid0", 00:08:26.786 "superblock": true, 00:08:26.786 "num_base_bdevs": 3, 00:08:26.786 "num_base_bdevs_discovered": 3, 00:08:26.786 "num_base_bdevs_operational": 3, 00:08:26.786 "base_bdevs_list": [ 00:08:26.786 { 00:08:26.786 "name": "BaseBdev1", 00:08:26.786 "uuid": "f7b97428-894e-4f68-a6f2-6b5ea06d7e51", 00:08:26.786 "is_configured": true, 00:08:26.786 "data_offset": 2048, 00:08:26.786 "data_size": 63488 00:08:26.786 }, 00:08:26.786 { 00:08:26.786 "name": "BaseBdev2", 00:08:26.786 "uuid": "8535d4af-451a-4856-899d-75872e957241", 00:08:26.786 "is_configured": true, 00:08:26.786 "data_offset": 2048, 00:08:26.786 "data_size": 63488 00:08:26.786 }, 
00:08:26.786 { 00:08:26.786 "name": "BaseBdev3", 00:08:26.786 "uuid": "5c60b242-53b8-44ca-9061-d4aff37cd456", 00:08:26.786 "is_configured": true, 00:08:26.786 "data_offset": 2048, 00:08:26.786 "data_size": 63488 00:08:26.786 } 00:08:26.786 ] 00:08:26.786 } 00:08:26.786 } 00:08:26.786 }' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.786 BaseBdev2 00:08:26.786 BaseBdev3' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.786 
04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.786 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.786 [2024-12-13 04:24:26.784101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.786 [2024-12-13 04:24:26.784127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.786 [2024-12-13 04:24:26.784191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.047 "name": "Existed_Raid", 00:08:27.047 "uuid": "b4d42f7a-8fd5-4dd7-bdfc-9cc9cf675e07", 00:08:27.047 "strip_size_kb": 64, 00:08:27.047 "state": "offline", 00:08:27.047 "raid_level": "raid0", 00:08:27.047 "superblock": true, 00:08:27.047 "num_base_bdevs": 3, 00:08:27.047 "num_base_bdevs_discovered": 2, 00:08:27.047 "num_base_bdevs_operational": 2, 00:08:27.047 "base_bdevs_list": [ 00:08:27.047 { 00:08:27.047 "name": null, 00:08:27.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.047 "is_configured": false, 00:08:27.047 "data_offset": 0, 00:08:27.047 "data_size": 63488 00:08:27.047 }, 00:08:27.047 { 00:08:27.047 "name": "BaseBdev2", 00:08:27.047 "uuid": "8535d4af-451a-4856-899d-75872e957241", 00:08:27.047 "is_configured": true, 00:08:27.047 "data_offset": 2048, 00:08:27.047 "data_size": 63488 00:08:27.047 }, 00:08:27.047 { 00:08:27.047 "name": "BaseBdev3", 00:08:27.047 "uuid": "5c60b242-53b8-44ca-9061-d4aff37cd456", 
00:08:27.047 "is_configured": true, 00:08:27.047 "data_offset": 2048, 00:08:27.047 "data_size": 63488 00:08:27.047 } 00:08:27.047 ] 00:08:27.047 }' 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.047 04:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.307 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:27.307 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.307 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.307 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.307 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.308 [2024-12-13 04:24:27.200219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.308 [2024-12-13 04:24:27.281180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:27.308 [2024-12-13 04:24:27.281280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.308 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.568 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:27.568 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:27.568 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:27.568 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:27.568 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.568 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 BaseBdev2 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.569 04:24:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 [ 00:08:27.569 { 00:08:27.569 "name": "BaseBdev2", 00:08:27.569 "aliases": [ 00:08:27.569 "d7f82ce1-439f-40fe-a78e-cb15fe1c8229" 00:08:27.569 ], 00:08:27.569 "product_name": "Malloc disk", 00:08:27.569 "block_size": 512, 00:08:27.569 "num_blocks": 65536, 00:08:27.569 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:27.569 "assigned_rate_limits": { 00:08:27.569 "rw_ios_per_sec": 0, 00:08:27.569 "rw_mbytes_per_sec": 0, 00:08:27.569 "r_mbytes_per_sec": 0, 00:08:27.569 "w_mbytes_per_sec": 0 00:08:27.569 }, 00:08:27.569 "claimed": false, 00:08:27.569 "zoned": false, 00:08:27.569 "supported_io_types": { 00:08:27.569 "read": true, 00:08:27.569 "write": true, 00:08:27.569 "unmap": true, 00:08:27.569 "flush": true, 00:08:27.569 "reset": true, 00:08:27.569 "nvme_admin": false, 00:08:27.569 "nvme_io": false, 00:08:27.569 "nvme_io_md": false, 00:08:27.569 "write_zeroes": true, 00:08:27.569 "zcopy": true, 00:08:27.569 "get_zone_info": false, 00:08:27.569 
"zone_management": false, 00:08:27.569 "zone_append": false, 00:08:27.569 "compare": false, 00:08:27.569 "compare_and_write": false, 00:08:27.569 "abort": true, 00:08:27.569 "seek_hole": false, 00:08:27.569 "seek_data": false, 00:08:27.569 "copy": true, 00:08:27.569 "nvme_iov_md": false 00:08:27.569 }, 00:08:27.569 "memory_domains": [ 00:08:27.569 { 00:08:27.569 "dma_device_id": "system", 00:08:27.569 "dma_device_type": 1 00:08:27.569 }, 00:08:27.569 { 00:08:27.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.569 "dma_device_type": 2 00:08:27.569 } 00:08:27.569 ], 00:08:27.569 "driver_specific": {} 00:08:27.569 } 00:08:27.569 ] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 BaseBdev3 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 [ 00:08:27.569 { 00:08:27.569 "name": "BaseBdev3", 00:08:27.569 "aliases": [ 00:08:27.569 "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3" 00:08:27.569 ], 00:08:27.569 "product_name": "Malloc disk", 00:08:27.569 "block_size": 512, 00:08:27.569 "num_blocks": 65536, 00:08:27.569 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:27.569 "assigned_rate_limits": { 00:08:27.569 "rw_ios_per_sec": 0, 00:08:27.569 "rw_mbytes_per_sec": 0, 00:08:27.569 "r_mbytes_per_sec": 0, 00:08:27.569 "w_mbytes_per_sec": 0 00:08:27.569 }, 00:08:27.569 "claimed": false, 00:08:27.569 "zoned": false, 00:08:27.569 "supported_io_types": { 00:08:27.569 "read": true, 00:08:27.569 "write": true, 00:08:27.569 "unmap": true, 00:08:27.569 "flush": true, 00:08:27.569 "reset": true, 00:08:27.569 "nvme_admin": false, 00:08:27.569 "nvme_io": false, 00:08:27.569 "nvme_io_md": false, 00:08:27.569 "write_zeroes": true, 00:08:27.569 
"zcopy": true, 00:08:27.569 "get_zone_info": false, 00:08:27.569 "zone_management": false, 00:08:27.569 "zone_append": false, 00:08:27.569 "compare": false, 00:08:27.569 "compare_and_write": false, 00:08:27.569 "abort": true, 00:08:27.569 "seek_hole": false, 00:08:27.569 "seek_data": false, 00:08:27.569 "copy": true, 00:08:27.569 "nvme_iov_md": false 00:08:27.569 }, 00:08:27.569 "memory_domains": [ 00:08:27.569 { 00:08:27.569 "dma_device_id": "system", 00:08:27.569 "dma_device_type": 1 00:08:27.569 }, 00:08:27.569 { 00:08:27.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.569 "dma_device_type": 2 00:08:27.569 } 00:08:27.569 ], 00:08:27.569 "driver_specific": {} 00:08:27.569 } 00:08:27.569 ] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 [2024-12-13 04:24:27.462744] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.569 [2024-12-13 04:24:27.462831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.569 [2024-12-13 04:24:27.462871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.569 [2024-12-13 04:24:27.464923] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.569 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.570 04:24:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.570 "name": "Existed_Raid", 00:08:27.570 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:27.570 "strip_size_kb": 64, 00:08:27.570 "state": "configuring", 00:08:27.570 "raid_level": "raid0", 00:08:27.570 "superblock": true, 00:08:27.570 "num_base_bdevs": 3, 00:08:27.570 "num_base_bdevs_discovered": 2, 00:08:27.570 "num_base_bdevs_operational": 3, 00:08:27.570 "base_bdevs_list": [ 00:08:27.570 { 00:08:27.570 "name": "BaseBdev1", 00:08:27.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.570 "is_configured": false, 00:08:27.570 "data_offset": 0, 00:08:27.570 "data_size": 0 00:08:27.570 }, 00:08:27.570 { 00:08:27.570 "name": "BaseBdev2", 00:08:27.570 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:27.570 "is_configured": true, 00:08:27.570 "data_offset": 2048, 00:08:27.570 "data_size": 63488 00:08:27.570 }, 00:08:27.570 { 00:08:27.570 "name": "BaseBdev3", 00:08:27.570 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:27.570 "is_configured": true, 00:08:27.570 "data_offset": 2048, 00:08:27.570 "data_size": 63488 00:08:27.570 } 00:08:27.570 ] 00:08:27.570 }' 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.570 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.830 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:27.830 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.830 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.089 [2024-12-13 04:24:27.846152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.089 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.090 04:24:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.090 "name": "Existed_Raid", 00:08:28.090 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:28.090 "strip_size_kb": 64, 
00:08:28.090 "state": "configuring", 00:08:28.090 "raid_level": "raid0", 00:08:28.090 "superblock": true, 00:08:28.090 "num_base_bdevs": 3, 00:08:28.090 "num_base_bdevs_discovered": 1, 00:08:28.090 "num_base_bdevs_operational": 3, 00:08:28.090 "base_bdevs_list": [ 00:08:28.090 { 00:08:28.090 "name": "BaseBdev1", 00:08:28.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.090 "is_configured": false, 00:08:28.090 "data_offset": 0, 00:08:28.090 "data_size": 0 00:08:28.090 }, 00:08:28.090 { 00:08:28.090 "name": null, 00:08:28.090 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:28.090 "is_configured": false, 00:08:28.090 "data_offset": 0, 00:08:28.090 "data_size": 63488 00:08:28.090 }, 00:08:28.090 { 00:08:28.090 "name": "BaseBdev3", 00:08:28.090 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:28.090 "is_configured": true, 00:08:28.090 "data_offset": 2048, 00:08:28.090 "data_size": 63488 00:08:28.090 } 00:08:28.090 ] 00:08:28.090 }' 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.090 04:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.348 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.348 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.348 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.348 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.348 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.348 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.349 [2024-12-13 04:24:28.357922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.349 BaseBdev1 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.349 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.608 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.608 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.608 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.608 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.608 
[ 00:08:28.608 { 00:08:28.608 "name": "BaseBdev1", 00:08:28.608 "aliases": [ 00:08:28.608 "7fca9da8-4c59-4764-924d-dc341192882a" 00:08:28.608 ], 00:08:28.608 "product_name": "Malloc disk", 00:08:28.608 "block_size": 512, 00:08:28.608 "num_blocks": 65536, 00:08:28.608 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:28.608 "assigned_rate_limits": { 00:08:28.608 "rw_ios_per_sec": 0, 00:08:28.608 "rw_mbytes_per_sec": 0, 00:08:28.608 "r_mbytes_per_sec": 0, 00:08:28.608 "w_mbytes_per_sec": 0 00:08:28.608 }, 00:08:28.608 "claimed": true, 00:08:28.608 "claim_type": "exclusive_write", 00:08:28.608 "zoned": false, 00:08:28.608 "supported_io_types": { 00:08:28.608 "read": true, 00:08:28.608 "write": true, 00:08:28.608 "unmap": true, 00:08:28.608 "flush": true, 00:08:28.608 "reset": true, 00:08:28.608 "nvme_admin": false, 00:08:28.608 "nvme_io": false, 00:08:28.608 "nvme_io_md": false, 00:08:28.608 "write_zeroes": true, 00:08:28.608 "zcopy": true, 00:08:28.608 "get_zone_info": false, 00:08:28.608 "zone_management": false, 00:08:28.608 "zone_append": false, 00:08:28.608 "compare": false, 00:08:28.608 "compare_and_write": false, 00:08:28.609 "abort": true, 00:08:28.609 "seek_hole": false, 00:08:28.609 "seek_data": false, 00:08:28.609 "copy": true, 00:08:28.609 "nvme_iov_md": false 00:08:28.609 }, 00:08:28.609 "memory_domains": [ 00:08:28.609 { 00:08:28.609 "dma_device_id": "system", 00:08:28.609 "dma_device_type": 1 00:08:28.609 }, 00:08:28.609 { 00:08:28.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.609 "dma_device_type": 2 00:08:28.609 } 00:08:28.609 ], 00:08:28.609 "driver_specific": {} 00:08:28.609 } 00:08:28.609 ] 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.609 "name": "Existed_Raid", 00:08:28.609 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:28.609 "strip_size_kb": 64, 00:08:28.609 "state": "configuring", 00:08:28.609 "raid_level": "raid0", 00:08:28.609 "superblock": true, 
00:08:28.609 "num_base_bdevs": 3, 00:08:28.609 "num_base_bdevs_discovered": 2, 00:08:28.609 "num_base_bdevs_operational": 3, 00:08:28.609 "base_bdevs_list": [ 00:08:28.609 { 00:08:28.609 "name": "BaseBdev1", 00:08:28.609 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:28.609 "is_configured": true, 00:08:28.609 "data_offset": 2048, 00:08:28.609 "data_size": 63488 00:08:28.609 }, 00:08:28.609 { 00:08:28.609 "name": null, 00:08:28.609 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:28.609 "is_configured": false, 00:08:28.609 "data_offset": 0, 00:08:28.609 "data_size": 63488 00:08:28.609 }, 00:08:28.609 { 00:08:28.609 "name": "BaseBdev3", 00:08:28.609 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:28.609 "is_configured": true, 00:08:28.609 "data_offset": 2048, 00:08:28.609 "data_size": 63488 00:08:28.609 } 00:08:28.609 ] 00:08:28.609 }' 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.609 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.869 [2024-12-13 04:24:28.813182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.869 "name": "Existed_Raid", 00:08:28.869 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:28.869 "strip_size_kb": 64, 00:08:28.869 "state": "configuring", 00:08:28.869 "raid_level": "raid0", 00:08:28.869 "superblock": true, 00:08:28.869 "num_base_bdevs": 3, 00:08:28.869 "num_base_bdevs_discovered": 1, 00:08:28.869 "num_base_bdevs_operational": 3, 00:08:28.869 "base_bdevs_list": [ 00:08:28.869 { 00:08:28.869 "name": "BaseBdev1", 00:08:28.869 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:28.869 "is_configured": true, 00:08:28.869 "data_offset": 2048, 00:08:28.869 "data_size": 63488 00:08:28.869 }, 00:08:28.869 { 00:08:28.869 "name": null, 00:08:28.869 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:28.869 "is_configured": false, 00:08:28.869 "data_offset": 0, 00:08:28.869 "data_size": 63488 00:08:28.869 }, 00:08:28.869 { 00:08:28.869 "name": null, 00:08:28.869 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:28.869 "is_configured": false, 00:08:28.869 "data_offset": 0, 00:08:28.869 "data_size": 63488 00:08:28.869 } 00:08:28.869 ] 00:08:28.869 }' 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.869 04:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.440 [2024-12-13 04:24:29.292549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.440 "name": "Existed_Raid", 00:08:29.440 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:29.440 "strip_size_kb": 64, 00:08:29.440 "state": "configuring", 00:08:29.440 "raid_level": "raid0", 00:08:29.440 "superblock": true, 00:08:29.440 "num_base_bdevs": 3, 00:08:29.440 "num_base_bdevs_discovered": 2, 00:08:29.440 "num_base_bdevs_operational": 3, 00:08:29.440 "base_bdevs_list": [ 00:08:29.440 { 00:08:29.440 "name": "BaseBdev1", 00:08:29.440 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:29.440 "is_configured": true, 00:08:29.440 "data_offset": 2048, 00:08:29.440 "data_size": 63488 00:08:29.440 }, 00:08:29.440 { 00:08:29.440 "name": null, 00:08:29.440 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:29.440 "is_configured": false, 00:08:29.440 "data_offset": 0, 00:08:29.440 "data_size": 63488 00:08:29.440 }, 00:08:29.440 { 00:08:29.440 "name": "BaseBdev3", 00:08:29.440 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:29.440 "is_configured": true, 00:08:29.440 "data_offset": 2048, 00:08:29.440 "data_size": 63488 00:08:29.440 } 00:08:29.440 ] 00:08:29.440 }' 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.440 04:24:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.701 [2024-12-13 04:24:29.668553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.701 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.961 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.961 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.961 "name": "Existed_Raid", 00:08:29.961 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:29.961 "strip_size_kb": 64, 00:08:29.961 "state": "configuring", 00:08:29.961 "raid_level": "raid0", 00:08:29.961 "superblock": true, 00:08:29.961 "num_base_bdevs": 3, 00:08:29.961 "num_base_bdevs_discovered": 1, 00:08:29.961 "num_base_bdevs_operational": 3, 00:08:29.961 "base_bdevs_list": [ 00:08:29.961 { 00:08:29.961 "name": null, 00:08:29.961 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:29.961 "is_configured": false, 00:08:29.961 "data_offset": 0, 00:08:29.961 "data_size": 63488 00:08:29.961 }, 00:08:29.961 { 00:08:29.961 "name": null, 00:08:29.961 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:29.961 "is_configured": false, 00:08:29.961 "data_offset": 0, 00:08:29.961 
"data_size": 63488 00:08:29.961 }, 00:08:29.961 { 00:08:29.961 "name": "BaseBdev3", 00:08:29.961 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:29.961 "is_configured": true, 00:08:29.961 "data_offset": 2048, 00:08:29.961 "data_size": 63488 00:08:29.961 } 00:08:29.961 ] 00:08:29.961 }' 00:08:29.961 04:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.961 04:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.222 [2024-12-13 04:24:30.152398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.222 04:24:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.222 "name": "Existed_Raid", 00:08:30.222 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:30.222 "strip_size_kb": 64, 00:08:30.222 "state": "configuring", 00:08:30.222 "raid_level": "raid0", 00:08:30.222 "superblock": true, 00:08:30.222 "num_base_bdevs": 3, 00:08:30.222 
"num_base_bdevs_discovered": 2, 00:08:30.222 "num_base_bdevs_operational": 3, 00:08:30.222 "base_bdevs_list": [ 00:08:30.222 { 00:08:30.222 "name": null, 00:08:30.222 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:30.222 "is_configured": false, 00:08:30.222 "data_offset": 0, 00:08:30.222 "data_size": 63488 00:08:30.222 }, 00:08:30.222 { 00:08:30.222 "name": "BaseBdev2", 00:08:30.222 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:30.222 "is_configured": true, 00:08:30.222 "data_offset": 2048, 00:08:30.222 "data_size": 63488 00:08:30.222 }, 00:08:30.222 { 00:08:30.222 "name": "BaseBdev3", 00:08:30.222 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:30.222 "is_configured": true, 00:08:30.222 "data_offset": 2048, 00:08:30.222 "data_size": 63488 00:08:30.222 } 00:08:30.222 ] 00:08:30.222 }' 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.222 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.793 04:24:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7fca9da8-4c59-4764-924d-dc341192882a 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 [2024-12-13 04:24:30.672296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:30.793 [2024-12-13 04:24:30.672644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:30.793 [2024-12-13 04:24:30.672709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.793 [2024-12-13 04:24:30.672999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:30.793 NewBaseBdev 00:08:30.793 [2024-12-13 04:24:30.673169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:30.793 [2024-12-13 04:24:30.673181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:30.793 [2024-12-13 04:24:30.673302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 [ 00:08:30.793 { 00:08:30.793 "name": "NewBaseBdev", 00:08:30.793 "aliases": [ 00:08:30.793 "7fca9da8-4c59-4764-924d-dc341192882a" 00:08:30.793 ], 00:08:30.793 "product_name": "Malloc disk", 00:08:30.793 "block_size": 512, 00:08:30.793 "num_blocks": 65536, 00:08:30.793 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:30.793 "assigned_rate_limits": { 00:08:30.793 "rw_ios_per_sec": 0, 00:08:30.793 "rw_mbytes_per_sec": 0, 00:08:30.793 "r_mbytes_per_sec": 0, 00:08:30.793 "w_mbytes_per_sec": 0 00:08:30.793 }, 00:08:30.793 "claimed": true, 00:08:30.793 "claim_type": "exclusive_write", 00:08:30.793 "zoned": false, 00:08:30.793 "supported_io_types": { 00:08:30.793 "read": true, 00:08:30.793 "write": true, 
00:08:30.793 "unmap": true, 00:08:30.793 "flush": true, 00:08:30.793 "reset": true, 00:08:30.793 "nvme_admin": false, 00:08:30.793 "nvme_io": false, 00:08:30.793 "nvme_io_md": false, 00:08:30.793 "write_zeroes": true, 00:08:30.793 "zcopy": true, 00:08:30.793 "get_zone_info": false, 00:08:30.793 "zone_management": false, 00:08:30.793 "zone_append": false, 00:08:30.793 "compare": false, 00:08:30.793 "compare_and_write": false, 00:08:30.793 "abort": true, 00:08:30.793 "seek_hole": false, 00:08:30.793 "seek_data": false, 00:08:30.793 "copy": true, 00:08:30.793 "nvme_iov_md": false 00:08:30.793 }, 00:08:30.793 "memory_domains": [ 00:08:30.793 { 00:08:30.793 "dma_device_id": "system", 00:08:30.793 "dma_device_type": 1 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.793 "dma_device_type": 2 00:08:30.793 } 00:08:30.793 ], 00:08:30.793 "driver_specific": {} 00:08:30.793 } 00:08:30.793 ] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.793 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.794 "name": "Existed_Raid", 00:08:30.794 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:30.794 "strip_size_kb": 64, 00:08:30.794 "state": "online", 00:08:30.794 "raid_level": "raid0", 00:08:30.794 "superblock": true, 00:08:30.794 "num_base_bdevs": 3, 00:08:30.794 "num_base_bdevs_discovered": 3, 00:08:30.794 "num_base_bdevs_operational": 3, 00:08:30.794 "base_bdevs_list": [ 00:08:30.794 { 00:08:30.794 "name": "NewBaseBdev", 00:08:30.794 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:30.794 "is_configured": true, 00:08:30.794 "data_offset": 2048, 00:08:30.794 "data_size": 63488 00:08:30.794 }, 00:08:30.794 { 00:08:30.794 "name": "BaseBdev2", 00:08:30.794 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:30.794 "is_configured": true, 00:08:30.794 "data_offset": 2048, 00:08:30.794 "data_size": 63488 00:08:30.794 }, 00:08:30.794 { 00:08:30.794 "name": "BaseBdev3", 00:08:30.794 "uuid": 
"4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:30.794 "is_configured": true, 00:08:30.794 "data_offset": 2048, 00:08:30.794 "data_size": 63488 00:08:30.794 } 00:08:30.794 ] 00:08:30.794 }' 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.794 04:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.363 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.363 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.364 [2024-12-13 04:24:31.139834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.364 "name": "Existed_Raid", 00:08:31.364 "aliases": [ 00:08:31.364 "bf7b7c41-d684-4999-ac8a-3f6759f941c1" 
00:08:31.364 ], 00:08:31.364 "product_name": "Raid Volume", 00:08:31.364 "block_size": 512, 00:08:31.364 "num_blocks": 190464, 00:08:31.364 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:31.364 "assigned_rate_limits": { 00:08:31.364 "rw_ios_per_sec": 0, 00:08:31.364 "rw_mbytes_per_sec": 0, 00:08:31.364 "r_mbytes_per_sec": 0, 00:08:31.364 "w_mbytes_per_sec": 0 00:08:31.364 }, 00:08:31.364 "claimed": false, 00:08:31.364 "zoned": false, 00:08:31.364 "supported_io_types": { 00:08:31.364 "read": true, 00:08:31.364 "write": true, 00:08:31.364 "unmap": true, 00:08:31.364 "flush": true, 00:08:31.364 "reset": true, 00:08:31.364 "nvme_admin": false, 00:08:31.364 "nvme_io": false, 00:08:31.364 "nvme_io_md": false, 00:08:31.364 "write_zeroes": true, 00:08:31.364 "zcopy": false, 00:08:31.364 "get_zone_info": false, 00:08:31.364 "zone_management": false, 00:08:31.364 "zone_append": false, 00:08:31.364 "compare": false, 00:08:31.364 "compare_and_write": false, 00:08:31.364 "abort": false, 00:08:31.364 "seek_hole": false, 00:08:31.364 "seek_data": false, 00:08:31.364 "copy": false, 00:08:31.364 "nvme_iov_md": false 00:08:31.364 }, 00:08:31.364 "memory_domains": [ 00:08:31.364 { 00:08:31.364 "dma_device_id": "system", 00:08:31.364 "dma_device_type": 1 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.364 "dma_device_type": 2 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "dma_device_id": "system", 00:08:31.364 "dma_device_type": 1 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.364 "dma_device_type": 2 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "dma_device_id": "system", 00:08:31.364 "dma_device_type": 1 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.364 "dma_device_type": 2 00:08:31.364 } 00:08:31.364 ], 00:08:31.364 "driver_specific": { 00:08:31.364 "raid": { 00:08:31.364 "uuid": "bf7b7c41-d684-4999-ac8a-3f6759f941c1", 00:08:31.364 
"strip_size_kb": 64, 00:08:31.364 "state": "online", 00:08:31.364 "raid_level": "raid0", 00:08:31.364 "superblock": true, 00:08:31.364 "num_base_bdevs": 3, 00:08:31.364 "num_base_bdevs_discovered": 3, 00:08:31.364 "num_base_bdevs_operational": 3, 00:08:31.364 "base_bdevs_list": [ 00:08:31.364 { 00:08:31.364 "name": "NewBaseBdev", 00:08:31.364 "uuid": "7fca9da8-4c59-4764-924d-dc341192882a", 00:08:31.364 "is_configured": true, 00:08:31.364 "data_offset": 2048, 00:08:31.364 "data_size": 63488 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "name": "BaseBdev2", 00:08:31.364 "uuid": "d7f82ce1-439f-40fe-a78e-cb15fe1c8229", 00:08:31.364 "is_configured": true, 00:08:31.364 "data_offset": 2048, 00:08:31.364 "data_size": 63488 00:08:31.364 }, 00:08:31.364 { 00:08:31.364 "name": "BaseBdev3", 00:08:31.364 "uuid": "4b027038-8c7a-4c98-b1d1-6b4a2a9dadc3", 00:08:31.364 "is_configured": true, 00:08:31.364 "data_offset": 2048, 00:08:31.364 "data_size": 63488 00:08:31.364 } 00:08:31.364 ] 00:08:31.364 } 00:08:31.364 } 00:08:31.364 }' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:31.364 BaseBdev2 00:08:31.364 BaseBdev3' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.364 04:24:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.364 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.624 [2024-12-13 04:24:31.419045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.624 [2024-12-13 04:24:31.419114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.624 [2024-12-13 04:24:31.419210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.624 [2024-12-13 04:24:31.419284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.624 [2024-12-13 04:24:31.419320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77308 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77308 ']' 00:08:31.624 04:24:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77308 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77308 00:08:31.624 killing process with pid 77308 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77308' 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77308 00:08:31.624 [2024-12-13 04:24:31.471369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.624 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77308 00:08:31.624 [2024-12-13 04:24:31.531507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.883 04:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.883 00:08:31.883 real 0m8.573s 00:08:31.883 user 0m14.263s 00:08:31.883 sys 0m1.906s 00:08:31.883 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.883 04:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.883 ************************************ 00:08:31.883 END TEST raid_state_function_test_sb 00:08:31.883 ************************************ 00:08:32.149 04:24:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:32.149 04:24:31 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:32.149 04:24:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.149 04:24:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.149 ************************************ 00:08:32.149 START TEST raid_superblock_test 00:08:32.149 ************************************ 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:32.149 04:24:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77906 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77906 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77906 ']' 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.149 04:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.149 [2024-12-13 04:24:32.025915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:32.149 [2024-12-13 04:24:32.026111] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77906 ] 00:08:32.417 [2024-12-13 04:24:32.184278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.417 [2024-12-13 04:24:32.223595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.417 [2024-12-13 04:24:32.301536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.417 [2024-12-13 04:24:32.301587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:32.986 
04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.986 malloc1 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.986 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.986 [2024-12-13 04:24:32.883945] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.986 [2024-12-13 04:24:32.884083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.986 [2024-12-13 04:24:32.884131] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:32.986 [2024-12-13 04:24:32.884170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.987 [2024-12-13 04:24:32.886722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.987 [2024-12-13 04:24:32.886826] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.987 pt1 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 malloc2 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 [2024-12-13 04:24:32.922765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.987 [2024-12-13 04:24:32.922889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.987 [2024-12-13 04:24:32.922927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.987 [2024-12-13 04:24:32.922965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.987 [2024-12-13 04:24:32.925449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.987 [2024-12-13 04:24:32.925531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.987 
pt2 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 malloc3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 [2024-12-13 04:24:32.957491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:32.987 [2024-12-13 04:24:32.957618] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.987 [2024-12-13 04:24:32.957660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:32.987 [2024-12-13 04:24:32.957711] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.987 [2024-12-13 04:24:32.960181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.987 [2024-12-13 04:24:32.960250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:32.987 pt3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 [2024-12-13 04:24:32.969544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.987 [2024-12-13 04:24:32.971787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.987 [2024-12-13 04:24:32.971888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:32.987 [2024-12-13 04:24:32.972069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:32.987 [2024-12-13 04:24:32.972118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:32.987 [2024-12-13 04:24:32.972457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:32.987 [2024-12-13 04:24:32.972653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:32.987 [2024-12-13 04:24:32.972699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:32.987 [2024-12-13 04:24:32.972858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.987 04:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.246 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.246 "name": "raid_bdev1", 00:08:33.246 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:33.246 "strip_size_kb": 64, 00:08:33.246 "state": "online", 00:08:33.246 "raid_level": "raid0", 00:08:33.246 "superblock": true, 00:08:33.246 "num_base_bdevs": 3, 00:08:33.246 "num_base_bdevs_discovered": 3, 00:08:33.246 "num_base_bdevs_operational": 3, 00:08:33.246 "base_bdevs_list": [ 00:08:33.246 { 00:08:33.246 "name": "pt1", 00:08:33.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.246 "is_configured": true, 00:08:33.246 "data_offset": 2048, 00:08:33.246 "data_size": 63488 00:08:33.246 }, 00:08:33.246 { 00:08:33.246 "name": "pt2", 00:08:33.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.246 "is_configured": true, 00:08:33.246 "data_offset": 2048, 00:08:33.246 "data_size": 63488 00:08:33.246 }, 00:08:33.246 { 00:08:33.246 "name": "pt3", 00:08:33.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.246 "is_configured": true, 00:08:33.246 "data_offset": 2048, 00:08:33.246 "data_size": 63488 00:08:33.246 } 00:08:33.246 ] 00:08:33.246 }' 00:08:33.246 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.246 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.505 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.505 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.505 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.505 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:33.505 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.505 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.506 [2024-12-13 04:24:33.421012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.506 "name": "raid_bdev1", 00:08:33.506 "aliases": [ 00:08:33.506 "d15e2f31-da31-48f3-bf11-3b1d3f804434" 00:08:33.506 ], 00:08:33.506 "product_name": "Raid Volume", 00:08:33.506 "block_size": 512, 00:08:33.506 "num_blocks": 190464, 00:08:33.506 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:33.506 "assigned_rate_limits": { 00:08:33.506 "rw_ios_per_sec": 0, 00:08:33.506 "rw_mbytes_per_sec": 0, 00:08:33.506 "r_mbytes_per_sec": 0, 00:08:33.506 "w_mbytes_per_sec": 0 00:08:33.506 }, 00:08:33.506 "claimed": false, 00:08:33.506 "zoned": false, 00:08:33.506 "supported_io_types": { 00:08:33.506 "read": true, 00:08:33.506 "write": true, 00:08:33.506 "unmap": true, 00:08:33.506 "flush": true, 00:08:33.506 "reset": true, 00:08:33.506 "nvme_admin": false, 00:08:33.506 "nvme_io": false, 00:08:33.506 "nvme_io_md": false, 00:08:33.506 "write_zeroes": true, 00:08:33.506 "zcopy": false, 00:08:33.506 "get_zone_info": false, 00:08:33.506 "zone_management": false, 00:08:33.506 "zone_append": false, 00:08:33.506 "compare": 
false, 00:08:33.506 "compare_and_write": false, 00:08:33.506 "abort": false, 00:08:33.506 "seek_hole": false, 00:08:33.506 "seek_data": false, 00:08:33.506 "copy": false, 00:08:33.506 "nvme_iov_md": false 00:08:33.506 }, 00:08:33.506 "memory_domains": [ 00:08:33.506 { 00:08:33.506 "dma_device_id": "system", 00:08:33.506 "dma_device_type": 1 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.506 "dma_device_type": 2 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "dma_device_id": "system", 00:08:33.506 "dma_device_type": 1 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.506 "dma_device_type": 2 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "dma_device_id": "system", 00:08:33.506 "dma_device_type": 1 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.506 "dma_device_type": 2 00:08:33.506 } 00:08:33.506 ], 00:08:33.506 "driver_specific": { 00:08:33.506 "raid": { 00:08:33.506 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:33.506 "strip_size_kb": 64, 00:08:33.506 "state": "online", 00:08:33.506 "raid_level": "raid0", 00:08:33.506 "superblock": true, 00:08:33.506 "num_base_bdevs": 3, 00:08:33.506 "num_base_bdevs_discovered": 3, 00:08:33.506 "num_base_bdevs_operational": 3, 00:08:33.506 "base_bdevs_list": [ 00:08:33.506 { 00:08:33.506 "name": "pt1", 00:08:33.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.506 "is_configured": true, 00:08:33.506 "data_offset": 2048, 00:08:33.506 "data_size": 63488 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "name": "pt2", 00:08:33.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.506 "is_configured": true, 00:08:33.506 "data_offset": 2048, 00:08:33.506 "data_size": 63488 00:08:33.506 }, 00:08:33.506 { 00:08:33.506 "name": "pt3", 00:08:33.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.506 "is_configured": true, 00:08:33.506 "data_offset": 2048, 00:08:33.506 "data_size": 
63488 00:08:33.506 } 00:08:33.506 ] 00:08:33.506 } 00:08:33.506 } 00:08:33.506 }' 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.506 pt2 00:08:33.506 pt3' 00:08:33.506 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:33.766 [2024-12-13 04:24:33.716644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d15e2f31-da31-48f3-bf11-3b1d3f804434 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d15e2f31-da31-48f3-bf11-3b1d3f804434 ']' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.766 [2024-12-13 04:24:33.760291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.766 [2024-12-13 04:24:33.760324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.766 [2024-12-13 04:24:33.760437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.766 [2024-12-13 04:24:33.760544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.766 [2024-12-13 04:24:33.760558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.766 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 [2024-12-13 04:24:33.908068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:34.027 [2024-12-13 04:24:33.910389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:34.027 [2024-12-13 04:24:33.910497] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:34.027 [2024-12-13 04:24:33.910594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:34.027 [2024-12-13 04:24:33.910677] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:34.027 [2024-12-13 04:24:33.910751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:34.027 [2024-12-13 04:24:33.910800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.027 [2024-12-13 04:24:33.910835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:34.027 request: 00:08:34.027 { 00:08:34.027 "name": "raid_bdev1", 00:08:34.027 "raid_level": "raid0", 00:08:34.027 "base_bdevs": [ 00:08:34.027 "malloc1", 00:08:34.027 "malloc2", 00:08:34.027 "malloc3" 00:08:34.027 ], 00:08:34.027 "strip_size_kb": 64, 00:08:34.027 "superblock": false, 00:08:34.027 "method": "bdev_raid_create", 00:08:34.027 "req_id": 1 00:08:34.027 } 00:08:34.027 Got JSON-RPC error response 00:08:34.027 response: 00:08:34.027 { 00:08:34.027 "code": -17, 00:08:34.027 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:34.027 } 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.027 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.027 [2024-12-13 04:24:33.975878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.027 [2024-12-13 04:24:33.975965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.027 [2024-12-13 04:24:33.975997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:34.027 [2024-12-13 04:24:33.976025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.027 [2024-12-13 04:24:33.978516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.028 [2024-12-13 04:24:33.978588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.028 [2024-12-13 04:24:33.978678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:34.028 [2024-12-13 04:24:33.978764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:34.028 pt1 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.028 04:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.028 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.028 "name": "raid_bdev1", 00:08:34.028 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:34.028 
"strip_size_kb": 64, 00:08:34.028 "state": "configuring", 00:08:34.028 "raid_level": "raid0", 00:08:34.028 "superblock": true, 00:08:34.028 "num_base_bdevs": 3, 00:08:34.028 "num_base_bdevs_discovered": 1, 00:08:34.028 "num_base_bdevs_operational": 3, 00:08:34.028 "base_bdevs_list": [ 00:08:34.028 { 00:08:34.028 "name": "pt1", 00:08:34.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.028 "is_configured": true, 00:08:34.028 "data_offset": 2048, 00:08:34.028 "data_size": 63488 00:08:34.028 }, 00:08:34.028 { 00:08:34.028 "name": null, 00:08:34.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.028 "is_configured": false, 00:08:34.028 "data_offset": 2048, 00:08:34.028 "data_size": 63488 00:08:34.028 }, 00:08:34.028 { 00:08:34.028 "name": null, 00:08:34.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.028 "is_configured": false, 00:08:34.028 "data_offset": 2048, 00:08:34.028 "data_size": 63488 00:08:34.028 } 00:08:34.028 ] 00:08:34.028 }' 00:08:34.028 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.028 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.597 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:34.597 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.597 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.598 [2024-12-13 04:24:34.439157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.598 [2024-12-13 04:24:34.439219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.598 [2024-12-13 04:24:34.439240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:08:34.598 [2024-12-13 04:24:34.439253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.598 [2024-12-13 04:24:34.439706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.598 [2024-12-13 04:24:34.439729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.598 [2024-12-13 04:24:34.439798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.598 [2024-12-13 04:24:34.439821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.598 pt2 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.598 [2024-12-13 04:24:34.451154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.598 04:24:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.598 "name": "raid_bdev1", 00:08:34.598 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:34.598 "strip_size_kb": 64, 00:08:34.598 "state": "configuring", 00:08:34.598 "raid_level": "raid0", 00:08:34.598 "superblock": true, 00:08:34.598 "num_base_bdevs": 3, 00:08:34.598 "num_base_bdevs_discovered": 1, 00:08:34.598 "num_base_bdevs_operational": 3, 00:08:34.598 "base_bdevs_list": [ 00:08:34.598 { 00:08:34.598 "name": "pt1", 00:08:34.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.598 "is_configured": true, 00:08:34.598 "data_offset": 2048, 00:08:34.598 "data_size": 63488 00:08:34.598 }, 00:08:34.598 { 00:08:34.598 "name": null, 00:08:34.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.598 "is_configured": false, 00:08:34.598 "data_offset": 0, 00:08:34.598 "data_size": 63488 00:08:34.598 }, 00:08:34.598 { 00:08:34.598 "name": null, 00:08:34.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.598 
"is_configured": false, 00:08:34.598 "data_offset": 2048, 00:08:34.598 "data_size": 63488 00:08:34.598 } 00:08:34.598 ] 00:08:34.598 }' 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.598 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 [2024-12-13 04:24:34.910348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:35.167 [2024-12-13 04:24:34.910454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.167 [2024-12-13 04:24:34.910491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:35.167 [2024-12-13 04:24:34.910518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.167 [2024-12-13 04:24:34.910941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.167 [2024-12-13 04:24:34.910997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:35.167 [2024-12-13 04:24:34.911100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:35.167 [2024-12-13 04:24:34.911147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.167 pt2 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 [2024-12-13 04:24:34.922324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:35.167 [2024-12-13 04:24:34.922417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.167 [2024-12-13 04:24:34.922454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:35.167 [2024-12-13 04:24:34.922492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.167 [2024-12-13 04:24:34.922838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.167 [2024-12-13 04:24:34.922859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:35.167 [2024-12-13 04:24:34.922913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:35.167 [2024-12-13 04:24:34.922929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:35.167 [2024-12-13 04:24:34.923023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:35.167 [2024-12-13 04:24:34.923031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.167 [2024-12-13 04:24:34.923273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:35.167 [2024-12-13 04:24:34.923391] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:35.167 [2024-12-13 04:24:34.923404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:35.167 [2024-12-13 04:24:34.923527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.167 pt3 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.167 "name": "raid_bdev1", 00:08:35.167 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:35.167 "strip_size_kb": 64, 00:08:35.167 "state": "online", 00:08:35.167 "raid_level": "raid0", 00:08:35.167 "superblock": true, 00:08:35.167 "num_base_bdevs": 3, 00:08:35.167 "num_base_bdevs_discovered": 3, 00:08:35.167 "num_base_bdevs_operational": 3, 00:08:35.167 "base_bdevs_list": [ 00:08:35.167 { 00:08:35.167 "name": "pt1", 00:08:35.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.167 "is_configured": true, 00:08:35.167 "data_offset": 2048, 00:08:35.167 "data_size": 63488 00:08:35.167 }, 00:08:35.167 { 00:08:35.167 "name": "pt2", 00:08:35.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.167 "is_configured": true, 00:08:35.167 "data_offset": 2048, 00:08:35.167 "data_size": 63488 00:08:35.167 }, 00:08:35.167 { 00:08:35.167 "name": "pt3", 00:08:35.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:35.167 "is_configured": true, 00:08:35.167 "data_offset": 2048, 00:08:35.168 "data_size": 63488 00:08:35.168 } 00:08:35.168 ] 00:08:35.168 }' 00:08:35.168 04:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.168 04:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:35.426 04:24:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.426 [2024-12-13 04:24:35.349907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.426 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.426 "name": "raid_bdev1", 00:08:35.426 "aliases": [ 00:08:35.426 "d15e2f31-da31-48f3-bf11-3b1d3f804434" 00:08:35.426 ], 00:08:35.426 "product_name": "Raid Volume", 00:08:35.426 "block_size": 512, 00:08:35.426 "num_blocks": 190464, 00:08:35.426 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:35.426 "assigned_rate_limits": { 00:08:35.426 "rw_ios_per_sec": 0, 00:08:35.426 "rw_mbytes_per_sec": 0, 00:08:35.426 "r_mbytes_per_sec": 0, 00:08:35.426 "w_mbytes_per_sec": 0 00:08:35.426 }, 00:08:35.426 "claimed": false, 00:08:35.426 "zoned": false, 00:08:35.426 "supported_io_types": { 00:08:35.426 "read": true, 00:08:35.426 "write": true, 00:08:35.426 "unmap": true, 00:08:35.426 "flush": true, 00:08:35.426 "reset": true, 00:08:35.426 "nvme_admin": false, 00:08:35.426 "nvme_io": false, 00:08:35.426 "nvme_io_md": false, 00:08:35.426 
"write_zeroes": true, 00:08:35.426 "zcopy": false, 00:08:35.426 "get_zone_info": false, 00:08:35.426 "zone_management": false, 00:08:35.426 "zone_append": false, 00:08:35.426 "compare": false, 00:08:35.426 "compare_and_write": false, 00:08:35.426 "abort": false, 00:08:35.426 "seek_hole": false, 00:08:35.426 "seek_data": false, 00:08:35.426 "copy": false, 00:08:35.426 "nvme_iov_md": false 00:08:35.426 }, 00:08:35.426 "memory_domains": [ 00:08:35.426 { 00:08:35.426 "dma_device_id": "system", 00:08:35.427 "dma_device_type": 1 00:08:35.427 }, 00:08:35.427 { 00:08:35.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.427 "dma_device_type": 2 00:08:35.427 }, 00:08:35.427 { 00:08:35.427 "dma_device_id": "system", 00:08:35.427 "dma_device_type": 1 00:08:35.427 }, 00:08:35.427 { 00:08:35.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.427 "dma_device_type": 2 00:08:35.427 }, 00:08:35.427 { 00:08:35.427 "dma_device_id": "system", 00:08:35.427 "dma_device_type": 1 00:08:35.427 }, 00:08:35.427 { 00:08:35.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.427 "dma_device_type": 2 00:08:35.427 } 00:08:35.427 ], 00:08:35.427 "driver_specific": { 00:08:35.427 "raid": { 00:08:35.427 "uuid": "d15e2f31-da31-48f3-bf11-3b1d3f804434", 00:08:35.427 "strip_size_kb": 64, 00:08:35.427 "state": "online", 00:08:35.427 "raid_level": "raid0", 00:08:35.427 "superblock": true, 00:08:35.427 "num_base_bdevs": 3, 00:08:35.427 "num_base_bdevs_discovered": 3, 00:08:35.427 "num_base_bdevs_operational": 3, 00:08:35.427 "base_bdevs_list": [ 00:08:35.427 { 00:08:35.427 "name": "pt1", 00:08:35.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.427 "is_configured": true, 00:08:35.427 "data_offset": 2048, 00:08:35.427 "data_size": 63488 00:08:35.427 }, 00:08:35.427 { 00:08:35.427 "name": "pt2", 00:08:35.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.427 "is_configured": true, 00:08:35.427 "data_offset": 2048, 00:08:35.427 "data_size": 63488 00:08:35.427 }, 00:08:35.427 
{ 00:08:35.427 "name": "pt3", 00:08:35.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:35.427 "is_configured": true, 00:08:35.427 "data_offset": 2048, 00:08:35.427 "data_size": 63488 00:08:35.427 } 00:08:35.427 ] 00:08:35.427 } 00:08:35.427 } 00:08:35.427 }' 00:08:35.427 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.427 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:35.427 pt2 00:08:35.427 pt3' 00:08:35.427 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.686 04:24:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.686 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.687 
[2024-12-13 04:24:35.625340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d15e2f31-da31-48f3-bf11-3b1d3f804434 '!=' d15e2f31-da31-48f3-bf11-3b1d3f804434 ']' 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77906 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77906 ']' 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77906 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77906 00:08:35.687 killing process with pid 77906 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77906' 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77906 00:08:35.687 [2024-12-13 04:24:35.694146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.687 [2024-12-13 04:24:35.694229] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.687 [2024-12-13 04:24:35.694301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.687 [2024-12-13 04:24:35.694310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:35.687 04:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77906 00:08:35.946 [2024-12-13 04:24:35.755064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.205 ************************************ 00:08:36.205 END TEST raid_superblock_test 00:08:36.205 ************************************ 00:08:36.205 04:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:36.205 00:08:36.205 real 0m4.146s 00:08:36.205 user 0m6.375s 00:08:36.205 sys 0m0.939s 00:08:36.205 04:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.205 04:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.205 04:24:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:36.205 04:24:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.205 04:24:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.205 04:24:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.205 ************************************ 00:08:36.205 START TEST raid_read_error_test 00:08:36.205 ************************************ 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:36.205 04:24:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zlXy00WZhU 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78148 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78148 00:08:36.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78148 ']' 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.205 04:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.465 [2024-12-13 04:24:36.267902] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:36.465 [2024-12-13 04:24:36.268110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78148 ] 00:08:36.465 [2024-12-13 04:24:36.424489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.465 [2024-12-13 04:24:36.464538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.724 [2024-12-13 04:24:36.540774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.724 [2024-12-13 04:24:36.540820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 BaseBdev1_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 true 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 [2024-12-13 04:24:37.125555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:37.293 [2024-12-13 04:24:37.125612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.293 [2024-12-13 04:24:37.125634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:37.293 [2024-12-13 04:24:37.125642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.293 [2024-12-13 04:24:37.128099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.293 [2024-12-13 04:24:37.128137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:37.293 BaseBdev1 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 BaseBdev2_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 true 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 [2024-12-13 04:24:37.172262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:37.293 [2024-12-13 04:24:37.172314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.293 [2024-12-13 04:24:37.172336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:37.293 [2024-12-13 04:24:37.172353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.293 [2024-12-13 04:24:37.174807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.293 [2024-12-13 04:24:37.174843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:37.293 BaseBdev2 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 BaseBdev3_malloc 00:08:37.293 04:24:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.293 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 true 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.294 [2024-12-13 04:24:37.218861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:37.294 [2024-12-13 04:24:37.218907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.294 [2024-12-13 04:24:37.218929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:37.294 [2024-12-13 04:24:37.218938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.294 [2024-12-13 04:24:37.221358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.294 [2024-12-13 04:24:37.221392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:37.294 BaseBdev3 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.294 [2024-12-13 04:24:37.230899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.294 [2024-12-13 04:24:37.233067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.294 [2024-12-13 04:24:37.233144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.294 [2024-12-13 04:24:37.233340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:37.294 [2024-12-13 04:24:37.233355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.294 [2024-12-13 04:24:37.233689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:37.294 [2024-12-13 04:24:37.233848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:37.294 [2024-12-13 04:24:37.233864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:37.294 [2024-12-13 04:24:37.234017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.294 04:24:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.294 "name": "raid_bdev1", 00:08:37.294 "uuid": "e77f4a0e-ffcf-446c-b1e5-eb7da67018c1", 00:08:37.294 "strip_size_kb": 64, 00:08:37.294 "state": "online", 00:08:37.294 "raid_level": "raid0", 00:08:37.294 "superblock": true, 00:08:37.294 "num_base_bdevs": 3, 00:08:37.294 "num_base_bdevs_discovered": 3, 00:08:37.294 "num_base_bdevs_operational": 3, 00:08:37.294 "base_bdevs_list": [ 00:08:37.294 { 00:08:37.294 "name": "BaseBdev1", 00:08:37.294 "uuid": "c3b58b42-d386-59f6-a559-1449a531b0b5", 00:08:37.294 "is_configured": true, 00:08:37.294 "data_offset": 2048, 00:08:37.294 "data_size": 63488 00:08:37.294 }, 00:08:37.294 { 00:08:37.294 "name": "BaseBdev2", 00:08:37.294 "uuid": "2ad6f7b0-00c9-5553-afa5-aaa50948650e", 00:08:37.294 "is_configured": true, 00:08:37.294 "data_offset": 2048, 00:08:37.294 "data_size": 63488 
00:08:37.294 }, 00:08:37.294 { 00:08:37.294 "name": "BaseBdev3", 00:08:37.294 "uuid": "0632b8b1-e380-5f26-ade2-1fb5b418e08a", 00:08:37.294 "is_configured": true, 00:08:37.294 "data_offset": 2048, 00:08:37.294 "data_size": 63488 00:08:37.294 } 00:08:37.294 ] 00:08:37.294 }' 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.294 04:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.863 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:37.863 04:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.863 [2024-12-13 04:24:37.750562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.802 "name": "raid_bdev1", 00:08:38.802 "uuid": "e77f4a0e-ffcf-446c-b1e5-eb7da67018c1", 00:08:38.802 "strip_size_kb": 64, 00:08:38.802 "state": "online", 00:08:38.802 "raid_level": "raid0", 00:08:38.802 "superblock": true, 00:08:38.802 "num_base_bdevs": 3, 00:08:38.802 "num_base_bdevs_discovered": 3, 00:08:38.802 "num_base_bdevs_operational": 3, 00:08:38.802 "base_bdevs_list": [ 00:08:38.802 { 00:08:38.802 "name": "BaseBdev1", 00:08:38.802 "uuid": "c3b58b42-d386-59f6-a559-1449a531b0b5", 00:08:38.802 "is_configured": true, 00:08:38.802 "data_offset": 2048, 00:08:38.802 "data_size": 63488 
00:08:38.802 }, 00:08:38.802 { 00:08:38.802 "name": "BaseBdev2", 00:08:38.802 "uuid": "2ad6f7b0-00c9-5553-afa5-aaa50948650e", 00:08:38.802 "is_configured": true, 00:08:38.802 "data_offset": 2048, 00:08:38.802 "data_size": 63488 00:08:38.802 }, 00:08:38.802 { 00:08:38.802 "name": "BaseBdev3", 00:08:38.802 "uuid": "0632b8b1-e380-5f26-ade2-1fb5b418e08a", 00:08:38.802 "is_configured": true, 00:08:38.802 "data_offset": 2048, 00:08:38.802 "data_size": 63488 00:08:38.802 } 00:08:38.802 ] 00:08:38.802 }' 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.802 04:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.370 [2024-12-13 04:24:39.135139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.370 [2024-12-13 04:24:39.135240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.370 [2024-12-13 04:24:39.137943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.370 [2024-12-13 04:24:39.138062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.370 [2024-12-13 04:24:39.138123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.370 [2024-12-13 04:24:39.138180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:39.370 { 00:08:39.370 "results": [ 00:08:39.370 { 00:08:39.370 "job": "raid_bdev1", 00:08:39.370 "core_mask": "0x1", 00:08:39.370 "workload": "randrw", 00:08:39.370 "percentage": 50, 
00:08:39.370 "status": "finished", 00:08:39.370 "queue_depth": 1, 00:08:39.370 "io_size": 131072, 00:08:39.370 "runtime": 1.38534, 00:08:39.370 "iops": 14746.560411162603, 00:08:39.370 "mibps": 1843.3200513953254, 00:08:39.370 "io_failed": 1, 00:08:39.370 "io_timeout": 0, 00:08:39.370 "avg_latency_us": 94.89496044647075, 00:08:39.370 "min_latency_us": 25.152838427947597, 00:08:39.370 "max_latency_us": 1345.0620087336245 00:08:39.370 } 00:08:39.370 ], 00:08:39.370 "core_count": 1 00:08:39.370 } 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78148 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78148 ']' 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78148 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78148 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78148' 00:08:39.370 killing process with pid 78148 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78148 00:08:39.370 [2024-12-13 04:24:39.173974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.370 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78148 00:08:39.370 [2024-12-13 
04:24:39.221391] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zlXy00WZhU 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:39.630 00:08:39.630 real 0m3.393s 00:08:39.630 user 0m4.154s 00:08:39.630 sys 0m0.627s 00:08:39.630 ************************************ 00:08:39.630 END TEST raid_read_error_test 00:08:39.630 ************************************ 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.630 04:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.630 04:24:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:39.630 04:24:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.630 04:24:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.630 04:24:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.630 ************************************ 00:08:39.630 START TEST raid_write_error_test 00:08:39.630 ************************************ 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:39.630 04:24:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.630 04:24:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.630 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.631 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L5gE90XAZJ 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78283 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78283 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78283 ']' 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.890 04:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.890 [2024-12-13 04:24:39.737687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:39.890 [2024-12-13 04:24:39.737903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78283 ] 00:08:39.890 [2024-12-13 04:24:39.895040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.150 [2024-12-13 04:24:39.933504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.150 [2024-12-13 04:24:40.008939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.150 [2024-12-13 04:24:40.009069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.719 BaseBdev1_malloc 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.719 true 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.719 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.719 [2024-12-13 04:24:40.589814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:40.719 [2024-12-13 04:24:40.589933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.720 [2024-12-13 04:24:40.589965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:40.720 [2024-12-13 04:24:40.589975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.720 [2024-12-13 04:24:40.592413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.720 [2024-12-13 04:24:40.592459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:40.720 BaseBdev1 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.720 BaseBdev2_malloc 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 true 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 [2024-12-13 04:24:40.636188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.720 [2024-12-13 04:24:40.636257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.720 [2024-12-13 04:24:40.636279] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:40.720 [2024-12-13 04:24:40.636297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.720 [2024-12-13 04:24:40.638731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.720 [2024-12-13 04:24:40.638778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.720 BaseBdev2 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.720 04:24:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 BaseBdev3_malloc 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 true 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 [2024-12-13 04:24:40.682560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:40.720 [2024-12-13 04:24:40.682697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.720 [2024-12-13 04:24:40.682726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:40.720 [2024-12-13 04:24:40.682735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.720 [2024-12-13 04:24:40.685105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.720 [2024-12-13 04:24:40.685141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:40.720 BaseBdev3 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 [2024-12-13 04:24:40.694622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.720 [2024-12-13 04:24:40.696698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.720 [2024-12-13 04:24:40.696773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.720 [2024-12-13 04:24:40.696955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:40.720 [2024-12-13 04:24:40.696970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.720 [2024-12-13 04:24:40.697225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:40.720 [2024-12-13 04:24:40.697384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:40.720 [2024-12-13 04:24:40.697395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:40.720 [2024-12-13 04:24:40.697586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.720 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.980 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.980 "name": "raid_bdev1", 00:08:40.980 "uuid": "f3465831-b32b-4435-ae74-b12675afb3b4", 00:08:40.980 "strip_size_kb": 64, 00:08:40.980 "state": "online", 00:08:40.980 "raid_level": "raid0", 00:08:40.980 "superblock": true, 00:08:40.980 "num_base_bdevs": 3, 00:08:40.980 "num_base_bdevs_discovered": 3, 00:08:40.980 "num_base_bdevs_operational": 3, 00:08:40.980 "base_bdevs_list": [ 00:08:40.980 { 00:08:40.980 "name": "BaseBdev1", 
00:08:40.980 "uuid": "c4bf2ee1-c496-5e9a-8d8b-9b56938bfd39", 00:08:40.980 "is_configured": true, 00:08:40.980 "data_offset": 2048, 00:08:40.980 "data_size": 63488 00:08:40.980 }, 00:08:40.980 { 00:08:40.980 "name": "BaseBdev2", 00:08:40.980 "uuid": "d361d5e1-f0c2-5324-bc76-9b78cddfd476", 00:08:40.980 "is_configured": true, 00:08:40.980 "data_offset": 2048, 00:08:40.980 "data_size": 63488 00:08:40.980 }, 00:08:40.980 { 00:08:40.980 "name": "BaseBdev3", 00:08:40.980 "uuid": "8c0dfb08-81ea-5adc-be46-9351dac7931b", 00:08:40.980 "is_configured": true, 00:08:40.980 "data_offset": 2048, 00:08:40.980 "data_size": 63488 00:08:40.980 } 00:08:40.980 ] 00:08:40.980 }' 00:08:40.980 04:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.980 04:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.239 04:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:41.239 04:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:41.239 [2024-12-13 04:24:41.238193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.178 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.438 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.438 "name": "raid_bdev1", 00:08:42.438 "uuid": "f3465831-b32b-4435-ae74-b12675afb3b4", 00:08:42.438 "strip_size_kb": 64, 00:08:42.438 "state": "online", 00:08:42.438 
"raid_level": "raid0", 00:08:42.438 "superblock": true, 00:08:42.438 "num_base_bdevs": 3, 00:08:42.438 "num_base_bdevs_discovered": 3, 00:08:42.438 "num_base_bdevs_operational": 3, 00:08:42.438 "base_bdevs_list": [ 00:08:42.438 { 00:08:42.438 "name": "BaseBdev1", 00:08:42.438 "uuid": "c4bf2ee1-c496-5e9a-8d8b-9b56938bfd39", 00:08:42.438 "is_configured": true, 00:08:42.438 "data_offset": 2048, 00:08:42.438 "data_size": 63488 00:08:42.438 }, 00:08:42.438 { 00:08:42.438 "name": "BaseBdev2", 00:08:42.438 "uuid": "d361d5e1-f0c2-5324-bc76-9b78cddfd476", 00:08:42.438 "is_configured": true, 00:08:42.438 "data_offset": 2048, 00:08:42.438 "data_size": 63488 00:08:42.438 }, 00:08:42.438 { 00:08:42.438 "name": "BaseBdev3", 00:08:42.438 "uuid": "8c0dfb08-81ea-5adc-be46-9351dac7931b", 00:08:42.438 "is_configured": true, 00:08:42.438 "data_offset": 2048, 00:08:42.438 "data_size": 63488 00:08:42.438 } 00:08:42.438 ] 00:08:42.438 }' 00:08:42.438 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.438 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.698 [2024-12-13 04:24:42.611167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.698 [2024-12-13 04:24:42.611316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.698 [2024-12-13 04:24:42.614098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.698 [2024-12-13 04:24:42.614153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.698 [2024-12-13 04:24:42.614192] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.698 [2024-12-13 04:24:42.614204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.698 { 00:08:42.698 "results": [ 00:08:42.698 { 00:08:42.698 "job": "raid_bdev1", 00:08:42.698 "core_mask": "0x1", 00:08:42.698 "workload": "randrw", 00:08:42.698 "percentage": 50, 00:08:42.698 "status": "finished", 00:08:42.698 "queue_depth": 1, 00:08:42.698 "io_size": 131072, 00:08:42.698 "runtime": 1.373666, 00:08:42.698 "iops": 14394.32875240415, 00:08:42.698 "mibps": 1799.2910940505187, 00:08:42.698 "io_failed": 1, 00:08:42.698 "io_timeout": 0, 00:08:42.698 "avg_latency_us": 97.20891241332737, 00:08:42.698 "min_latency_us": 18.78078602620087, 00:08:42.698 "max_latency_us": 1337.907423580786 00:08:42.698 } 00:08:42.698 ], 00:08:42.698 "core_count": 1 00:08:42.698 } 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78283 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78283 ']' 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78283 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78283 00:08:42.698 killing process with pid 78283 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.698 04:24:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78283' 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78283 00:08:42.698 [2024-12-13 04:24:42.666567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.698 04:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78283 00:08:42.698 [2024-12-13 04:24:42.712331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L5gE90XAZJ 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:43.268 ************************************ 00:08:43.268 END TEST raid_write_error_test 00:08:43.268 ************************************ 00:08:43.268 00:08:43.268 real 0m3.410s 00:08:43.268 user 0m4.204s 00:08:43.268 sys 0m0.635s 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.268 04:24:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.268 04:24:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:43.268 04:24:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:43.268 04:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.268 04:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.268 04:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.268 ************************************ 00:08:43.268 START TEST raid_state_function_test 00:08:43.268 ************************************ 00:08:43.268 04:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:43.268 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:43.268 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:43.269 04:24:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78410 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78410' 00:08:43.269 Process raid pid: 78410 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78410 00:08:43.269 04:24:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78410 ']' 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.269 04:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.269 [2024-12-13 04:24:43.207878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:43.269 [2024-12-13 04:24:43.208085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.528 [2024-12-13 04:24:43.362997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.528 [2024-12-13 04:24:43.401020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.528 [2024-12-13 04:24:43.476213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.528 [2024-12-13 04:24:43.476347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.097 [2024-12-13 04:24:44.033908] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.097 [2024-12-13 04:24:44.033975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.097 [2024-12-13 04:24:44.033986] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.097 [2024-12-13 04:24:44.033996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.097 [2024-12-13 04:24:44.034001] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.097 [2024-12-13 04:24:44.034016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.097 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.097 "name": "Existed_Raid", 00:08:44.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.097 "strip_size_kb": 64, 00:08:44.097 "state": "configuring", 00:08:44.097 "raid_level": "concat", 00:08:44.097 "superblock": false, 00:08:44.097 "num_base_bdevs": 3, 00:08:44.097 "num_base_bdevs_discovered": 0, 00:08:44.097 "num_base_bdevs_operational": 3, 00:08:44.097 "base_bdevs_list": [ 00:08:44.097 { 00:08:44.097 "name": "BaseBdev1", 00:08:44.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.097 "is_configured": false, 00:08:44.097 "data_offset": 0, 00:08:44.097 "data_size": 0 00:08:44.097 }, 00:08:44.097 { 00:08:44.097 "name": "BaseBdev2", 00:08:44.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.097 "is_configured": false, 00:08:44.097 "data_offset": 0, 00:08:44.097 "data_size": 0 00:08:44.097 }, 00:08:44.097 { 00:08:44.097 "name": "BaseBdev3", 00:08:44.097 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:44.097 "is_configured": false, 00:08:44.097 "data_offset": 0, 00:08:44.097 "data_size": 0 00:08:44.098 } 00:08:44.098 ] 00:08:44.098 }' 00:08:44.098 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.098 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.688 [2024-12-13 04:24:44.465267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.688 [2024-12-13 04:24:44.465399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.688 [2024-12-13 04:24:44.477100] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.688 [2024-12-13 04:24:44.477195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.688 [2024-12-13 04:24:44.477224] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.688 [2024-12-13 04:24:44.477248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:44.688 [2024-12-13 04:24:44.477266] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.688 [2024-12-13 04:24:44.477287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.688 [2024-12-13 04:24:44.504477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.688 BaseBdev1 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.688 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.688 [ 00:08:44.688 { 00:08:44.688 "name": "BaseBdev1", 00:08:44.688 "aliases": [ 00:08:44.688 "bfa43859-4908-408a-8fdb-c27f39ba8f45" 00:08:44.688 ], 00:08:44.688 "product_name": "Malloc disk", 00:08:44.688 "block_size": 512, 00:08:44.688 "num_blocks": 65536, 00:08:44.689 "uuid": "bfa43859-4908-408a-8fdb-c27f39ba8f45", 00:08:44.689 "assigned_rate_limits": { 00:08:44.689 "rw_ios_per_sec": 0, 00:08:44.689 "rw_mbytes_per_sec": 0, 00:08:44.689 "r_mbytes_per_sec": 0, 00:08:44.689 "w_mbytes_per_sec": 0 00:08:44.689 }, 00:08:44.689 "claimed": true, 00:08:44.689 "claim_type": "exclusive_write", 00:08:44.689 "zoned": false, 00:08:44.689 "supported_io_types": { 00:08:44.689 "read": true, 00:08:44.689 "write": true, 00:08:44.689 "unmap": true, 00:08:44.689 "flush": true, 00:08:44.689 "reset": true, 00:08:44.689 "nvme_admin": false, 00:08:44.689 "nvme_io": false, 00:08:44.689 "nvme_io_md": false, 00:08:44.689 "write_zeroes": true, 00:08:44.689 "zcopy": true, 00:08:44.689 "get_zone_info": false, 00:08:44.689 "zone_management": false, 00:08:44.689 "zone_append": false, 00:08:44.689 "compare": false, 00:08:44.689 "compare_and_write": false, 00:08:44.689 "abort": true, 00:08:44.689 "seek_hole": false, 00:08:44.689 "seek_data": false, 00:08:44.689 "copy": true, 00:08:44.689 "nvme_iov_md": false 00:08:44.689 }, 00:08:44.689 "memory_domains": [ 00:08:44.689 { 00:08:44.689 "dma_device_id": "system", 00:08:44.689 "dma_device_type": 1 00:08:44.689 }, 00:08:44.689 { 00:08:44.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:44.689 "dma_device_type": 2 00:08:44.689 } 00:08:44.689 ], 00:08:44.689 "driver_specific": {} 00:08:44.689 } 00:08:44.689 ] 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.689 04:24:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.689 "name": "Existed_Raid", 00:08:44.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.689 "strip_size_kb": 64, 00:08:44.689 "state": "configuring", 00:08:44.689 "raid_level": "concat", 00:08:44.689 "superblock": false, 00:08:44.689 "num_base_bdevs": 3, 00:08:44.689 "num_base_bdevs_discovered": 1, 00:08:44.689 "num_base_bdevs_operational": 3, 00:08:44.689 "base_bdevs_list": [ 00:08:44.689 { 00:08:44.689 "name": "BaseBdev1", 00:08:44.689 "uuid": "bfa43859-4908-408a-8fdb-c27f39ba8f45", 00:08:44.689 "is_configured": true, 00:08:44.689 "data_offset": 0, 00:08:44.689 "data_size": 65536 00:08:44.689 }, 00:08:44.689 { 00:08:44.689 "name": "BaseBdev2", 00:08:44.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.689 "is_configured": false, 00:08:44.689 "data_offset": 0, 00:08:44.689 "data_size": 0 00:08:44.689 }, 00:08:44.689 { 00:08:44.689 "name": "BaseBdev3", 00:08:44.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.689 "is_configured": false, 00:08:44.689 "data_offset": 0, 00:08:44.689 "data_size": 0 00:08:44.689 } 00:08:44.689 ] 00:08:44.689 }' 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.689 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.258 04:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.258 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.258 04:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.258 [2024-12-13 04:24:45.003596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.258 [2024-12-13 04:24:45.003655] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.258 [2024-12-13 04:24:45.015612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.258 [2024-12-13 04:24:45.017822] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.258 [2024-12-13 04:24:45.017864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.258 [2024-12-13 04:24:45.017873] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.258 [2024-12-13 04:24:45.017883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.258 04:24:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.258 "name": "Existed_Raid", 00:08:45.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.258 "strip_size_kb": 64, 00:08:45.258 "state": "configuring", 00:08:45.258 "raid_level": "concat", 00:08:45.258 "superblock": false, 00:08:45.258 "num_base_bdevs": 3, 00:08:45.258 "num_base_bdevs_discovered": 1, 00:08:45.258 "num_base_bdevs_operational": 3, 00:08:45.258 "base_bdevs_list": [ 00:08:45.258 { 00:08:45.258 "name": "BaseBdev1", 00:08:45.258 "uuid": "bfa43859-4908-408a-8fdb-c27f39ba8f45", 00:08:45.258 "is_configured": true, 00:08:45.258 "data_offset": 
0, 00:08:45.258 "data_size": 65536 00:08:45.258 }, 00:08:45.258 { 00:08:45.258 "name": "BaseBdev2", 00:08:45.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.258 "is_configured": false, 00:08:45.258 "data_offset": 0, 00:08:45.258 "data_size": 0 00:08:45.258 }, 00:08:45.258 { 00:08:45.258 "name": "BaseBdev3", 00:08:45.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.258 "is_configured": false, 00:08:45.258 "data_offset": 0, 00:08:45.258 "data_size": 0 00:08:45.258 } 00:08:45.258 ] 00:08:45.258 }' 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.258 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.518 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.519 [2024-12-13 04:24:45.455518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.519 BaseBdev2 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.519 [ 00:08:45.519 { 00:08:45.519 "name": "BaseBdev2", 00:08:45.519 "aliases": [ 00:08:45.519 "b093b9ef-8dfd-485d-a307-9e8cc3efa089" 00:08:45.519 ], 00:08:45.519 "product_name": "Malloc disk", 00:08:45.519 "block_size": 512, 00:08:45.519 "num_blocks": 65536, 00:08:45.519 "uuid": "b093b9ef-8dfd-485d-a307-9e8cc3efa089", 00:08:45.519 "assigned_rate_limits": { 00:08:45.519 "rw_ios_per_sec": 0, 00:08:45.519 "rw_mbytes_per_sec": 0, 00:08:45.519 "r_mbytes_per_sec": 0, 00:08:45.519 "w_mbytes_per_sec": 0 00:08:45.519 }, 00:08:45.519 "claimed": true, 00:08:45.519 "claim_type": "exclusive_write", 00:08:45.519 "zoned": false, 00:08:45.519 "supported_io_types": { 00:08:45.519 "read": true, 00:08:45.519 "write": true, 00:08:45.519 "unmap": true, 00:08:45.519 "flush": true, 00:08:45.519 "reset": true, 00:08:45.519 "nvme_admin": false, 00:08:45.519 "nvme_io": false, 00:08:45.519 "nvme_io_md": false, 00:08:45.519 "write_zeroes": true, 00:08:45.519 "zcopy": true, 00:08:45.519 "get_zone_info": false, 00:08:45.519 "zone_management": false, 00:08:45.519 "zone_append": false, 00:08:45.519 "compare": false, 00:08:45.519 "compare_and_write": false, 00:08:45.519 "abort": true, 00:08:45.519 "seek_hole": 
false, 00:08:45.519 "seek_data": false, 00:08:45.519 "copy": true, 00:08:45.519 "nvme_iov_md": false 00:08:45.519 }, 00:08:45.519 "memory_domains": [ 00:08:45.519 { 00:08:45.519 "dma_device_id": "system", 00:08:45.519 "dma_device_type": 1 00:08:45.519 }, 00:08:45.519 { 00:08:45.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.519 "dma_device_type": 2 00:08:45.519 } 00:08:45.519 ], 00:08:45.519 "driver_specific": {} 00:08:45.519 } 00:08:45.519 ] 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.519 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.778 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.778 "name": "Existed_Raid", 00:08:45.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.778 "strip_size_kb": 64, 00:08:45.778 "state": "configuring", 00:08:45.778 "raid_level": "concat", 00:08:45.778 "superblock": false, 00:08:45.778 "num_base_bdevs": 3, 00:08:45.778 "num_base_bdevs_discovered": 2, 00:08:45.778 "num_base_bdevs_operational": 3, 00:08:45.778 "base_bdevs_list": [ 00:08:45.778 { 00:08:45.778 "name": "BaseBdev1", 00:08:45.778 "uuid": "bfa43859-4908-408a-8fdb-c27f39ba8f45", 00:08:45.778 "is_configured": true, 00:08:45.778 "data_offset": 0, 00:08:45.778 "data_size": 65536 00:08:45.778 }, 00:08:45.778 { 00:08:45.778 "name": "BaseBdev2", 00:08:45.778 "uuid": "b093b9ef-8dfd-485d-a307-9e8cc3efa089", 00:08:45.778 "is_configured": true, 00:08:45.778 "data_offset": 0, 00:08:45.778 "data_size": 65536 00:08:45.778 }, 00:08:45.778 { 00:08:45.778 "name": "BaseBdev3", 00:08:45.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.778 "is_configured": false, 00:08:45.778 "data_offset": 0, 00:08:45.778 "data_size": 0 00:08:45.778 } 00:08:45.778 ] 00:08:45.778 }' 00:08:45.778 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.778 04:24:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.039 [2024-12-13 04:24:45.975342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.039 [2024-12-13 04:24:45.975699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:46.039 [2024-12-13 04:24:45.975762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:46.039 [2024-12-13 04:24:45.976853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:46.039 [2024-12-13 04:24:45.977367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:46.039 [2024-12-13 04:24:45.977429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:46.039 BaseBdev3 00:08:46.039 [2024-12-13 04:24:45.978183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.039 04:24:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.039 04:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.039 [ 00:08:46.039 { 00:08:46.039 "name": "BaseBdev3", 00:08:46.039 "aliases": [ 00:08:46.039 "2f1339b7-7165-4a0b-a201-d99c0bf97ea0" 00:08:46.039 ], 00:08:46.039 "product_name": "Malloc disk", 00:08:46.039 "block_size": 512, 00:08:46.039 "num_blocks": 65536, 00:08:46.039 "uuid": "2f1339b7-7165-4a0b-a201-d99c0bf97ea0", 00:08:46.039 "assigned_rate_limits": { 00:08:46.039 "rw_ios_per_sec": 0, 00:08:46.039 "rw_mbytes_per_sec": 0, 00:08:46.039 "r_mbytes_per_sec": 0, 00:08:46.039 "w_mbytes_per_sec": 0 00:08:46.039 }, 00:08:46.039 "claimed": true, 00:08:46.039 "claim_type": "exclusive_write", 00:08:46.039 "zoned": false, 00:08:46.039 "supported_io_types": { 00:08:46.039 "read": true, 00:08:46.039 "write": true, 00:08:46.039 "unmap": true, 00:08:46.039 "flush": true, 00:08:46.039 "reset": true, 00:08:46.039 "nvme_admin": false, 00:08:46.039 "nvme_io": false, 00:08:46.039 "nvme_io_md": false, 00:08:46.039 "write_zeroes": true, 00:08:46.039 "zcopy": true, 00:08:46.039 "get_zone_info": false, 00:08:46.039 "zone_management": false, 00:08:46.039 "zone_append": false, 00:08:46.039 "compare": false, 
00:08:46.039 "compare_and_write": false, 00:08:46.039 "abort": true, 00:08:46.039 "seek_hole": false, 00:08:46.039 "seek_data": false, 00:08:46.039 "copy": true, 00:08:46.039 "nvme_iov_md": false 00:08:46.039 }, 00:08:46.039 "memory_domains": [ 00:08:46.039 { 00:08:46.039 "dma_device_id": "system", 00:08:46.039 "dma_device_type": 1 00:08:46.039 }, 00:08:46.039 { 00:08:46.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.039 "dma_device_type": 2 00:08:46.039 } 00:08:46.039 ], 00:08:46.039 "driver_specific": {} 00:08:46.039 } 00:08:46.039 ] 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.039 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.299 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.299 "name": "Existed_Raid", 00:08:46.299 "uuid": "e1ff1b24-5cb7-41a9-9601-31d808c66c01", 00:08:46.299 "strip_size_kb": 64, 00:08:46.299 "state": "online", 00:08:46.299 "raid_level": "concat", 00:08:46.299 "superblock": false, 00:08:46.299 "num_base_bdevs": 3, 00:08:46.299 "num_base_bdevs_discovered": 3, 00:08:46.299 "num_base_bdevs_operational": 3, 00:08:46.299 "base_bdevs_list": [ 00:08:46.299 { 00:08:46.299 "name": "BaseBdev1", 00:08:46.299 "uuid": "bfa43859-4908-408a-8fdb-c27f39ba8f45", 00:08:46.299 "is_configured": true, 00:08:46.299 "data_offset": 0, 00:08:46.299 "data_size": 65536 00:08:46.299 }, 00:08:46.299 { 00:08:46.299 "name": "BaseBdev2", 00:08:46.299 "uuid": "b093b9ef-8dfd-485d-a307-9e8cc3efa089", 00:08:46.299 "is_configured": true, 00:08:46.299 "data_offset": 0, 00:08:46.299 "data_size": 65536 00:08:46.299 }, 00:08:46.299 { 00:08:46.299 "name": "BaseBdev3", 00:08:46.299 "uuid": "2f1339b7-7165-4a0b-a201-d99c0bf97ea0", 00:08:46.299 "is_configured": true, 00:08:46.299 "data_offset": 0, 00:08:46.299 "data_size": 65536 00:08:46.299 } 00:08:46.299 ] 00:08:46.299 }' 00:08:46.299 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:46.299 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.559 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.559 [2024-12-13 04:24:46.434793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.560 "name": "Existed_Raid", 00:08:46.560 "aliases": [ 00:08:46.560 "e1ff1b24-5cb7-41a9-9601-31d808c66c01" 00:08:46.560 ], 00:08:46.560 "product_name": "Raid Volume", 00:08:46.560 "block_size": 512, 00:08:46.560 "num_blocks": 196608, 00:08:46.560 "uuid": "e1ff1b24-5cb7-41a9-9601-31d808c66c01", 00:08:46.560 "assigned_rate_limits": { 00:08:46.560 "rw_ios_per_sec": 0, 00:08:46.560 "rw_mbytes_per_sec": 0, 00:08:46.560 "r_mbytes_per_sec": 
0, 00:08:46.560 "w_mbytes_per_sec": 0 00:08:46.560 }, 00:08:46.560 "claimed": false, 00:08:46.560 "zoned": false, 00:08:46.560 "supported_io_types": { 00:08:46.560 "read": true, 00:08:46.560 "write": true, 00:08:46.560 "unmap": true, 00:08:46.560 "flush": true, 00:08:46.560 "reset": true, 00:08:46.560 "nvme_admin": false, 00:08:46.560 "nvme_io": false, 00:08:46.560 "nvme_io_md": false, 00:08:46.560 "write_zeroes": true, 00:08:46.560 "zcopy": false, 00:08:46.560 "get_zone_info": false, 00:08:46.560 "zone_management": false, 00:08:46.560 "zone_append": false, 00:08:46.560 "compare": false, 00:08:46.560 "compare_and_write": false, 00:08:46.560 "abort": false, 00:08:46.560 "seek_hole": false, 00:08:46.560 "seek_data": false, 00:08:46.560 "copy": false, 00:08:46.560 "nvme_iov_md": false 00:08:46.560 }, 00:08:46.560 "memory_domains": [ 00:08:46.560 { 00:08:46.560 "dma_device_id": "system", 00:08:46.560 "dma_device_type": 1 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.560 "dma_device_type": 2 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "dma_device_id": "system", 00:08:46.560 "dma_device_type": 1 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.560 "dma_device_type": 2 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "dma_device_id": "system", 00:08:46.560 "dma_device_type": 1 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.560 "dma_device_type": 2 00:08:46.560 } 00:08:46.560 ], 00:08:46.560 "driver_specific": { 00:08:46.560 "raid": { 00:08:46.560 "uuid": "e1ff1b24-5cb7-41a9-9601-31d808c66c01", 00:08:46.560 "strip_size_kb": 64, 00:08:46.560 "state": "online", 00:08:46.560 "raid_level": "concat", 00:08:46.560 "superblock": false, 00:08:46.560 "num_base_bdevs": 3, 00:08:46.560 "num_base_bdevs_discovered": 3, 00:08:46.560 "num_base_bdevs_operational": 3, 00:08:46.560 "base_bdevs_list": [ 00:08:46.560 { 00:08:46.560 "name": "BaseBdev1", 
00:08:46.560 "uuid": "bfa43859-4908-408a-8fdb-c27f39ba8f45", 00:08:46.560 "is_configured": true, 00:08:46.560 "data_offset": 0, 00:08:46.560 "data_size": 65536 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "name": "BaseBdev2", 00:08:46.560 "uuid": "b093b9ef-8dfd-485d-a307-9e8cc3efa089", 00:08:46.560 "is_configured": true, 00:08:46.560 "data_offset": 0, 00:08:46.560 "data_size": 65536 00:08:46.560 }, 00:08:46.560 { 00:08:46.560 "name": "BaseBdev3", 00:08:46.560 "uuid": "2f1339b7-7165-4a0b-a201-d99c0bf97ea0", 00:08:46.560 "is_configured": true, 00:08:46.560 "data_offset": 0, 00:08:46.560 "data_size": 65536 00:08:46.560 } 00:08:46.560 ] 00:08:46.560 } 00:08:46.560 } 00:08:46.560 }' 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.560 BaseBdev2 00:08:46.560 BaseBdev3' 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.560 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.820 [2024-12-13 04:24:46.702081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.820 [2024-12-13 04:24:46.702113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.820 [2024-12-13 04:24:46.702181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.820 "name": "Existed_Raid", 00:08:46.820 "uuid": "e1ff1b24-5cb7-41a9-9601-31d808c66c01", 00:08:46.820 "strip_size_kb": 64, 00:08:46.820 "state": "offline", 00:08:46.820 "raid_level": "concat", 00:08:46.820 "superblock": false, 00:08:46.820 "num_base_bdevs": 3, 00:08:46.820 "num_base_bdevs_discovered": 2, 00:08:46.820 "num_base_bdevs_operational": 2, 00:08:46.820 "base_bdevs_list": [ 00:08:46.820 { 00:08:46.820 "name": null, 00:08:46.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.820 "is_configured": false, 00:08:46.820 "data_offset": 0, 00:08:46.820 "data_size": 65536 00:08:46.820 }, 00:08:46.820 { 00:08:46.820 "name": "BaseBdev2", 00:08:46.820 "uuid": 
"b093b9ef-8dfd-485d-a307-9e8cc3efa089", 00:08:46.820 "is_configured": true, 00:08:46.820 "data_offset": 0, 00:08:46.820 "data_size": 65536 00:08:46.820 }, 00:08:46.820 { 00:08:46.820 "name": "BaseBdev3", 00:08:46.820 "uuid": "2f1339b7-7165-4a0b-a201-d99c0bf97ea0", 00:08:46.820 "is_configured": true, 00:08:46.820 "data_offset": 0, 00:08:46.820 "data_size": 65536 00:08:46.820 } 00:08:46.820 ] 00:08:46.820 }' 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.820 04:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 [2024-12-13 04:24:47.234004] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 [2024-12-13 04:24:47.310225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.390 [2024-12-13 04:24:47.310342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.390 04:24:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 BaseBdev2 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.390 
04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 [ 00:08:47.650 { 00:08:47.650 "name": "BaseBdev2", 00:08:47.650 "aliases": [ 00:08:47.650 "884baffe-7c20-4d9d-980b-7d1817129505" 00:08:47.650 ], 00:08:47.650 "product_name": "Malloc disk", 00:08:47.650 "block_size": 512, 00:08:47.650 "num_blocks": 65536, 00:08:47.650 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:47.650 "assigned_rate_limits": { 00:08:47.650 "rw_ios_per_sec": 0, 00:08:47.650 "rw_mbytes_per_sec": 0, 00:08:47.650 "r_mbytes_per_sec": 0, 00:08:47.650 "w_mbytes_per_sec": 0 00:08:47.650 }, 00:08:47.650 "claimed": false, 00:08:47.650 "zoned": false, 00:08:47.650 "supported_io_types": { 00:08:47.650 "read": true, 00:08:47.650 "write": true, 00:08:47.650 "unmap": true, 00:08:47.650 "flush": true, 00:08:47.650 "reset": true, 00:08:47.650 "nvme_admin": false, 00:08:47.650 "nvme_io": false, 00:08:47.650 "nvme_io_md": false, 00:08:47.650 "write_zeroes": true, 
00:08:47.650 "zcopy": true, 00:08:47.650 "get_zone_info": false, 00:08:47.650 "zone_management": false, 00:08:47.650 "zone_append": false, 00:08:47.650 "compare": false, 00:08:47.650 "compare_and_write": false, 00:08:47.650 "abort": true, 00:08:47.650 "seek_hole": false, 00:08:47.650 "seek_data": false, 00:08:47.650 "copy": true, 00:08:47.650 "nvme_iov_md": false 00:08:47.650 }, 00:08:47.650 "memory_domains": [ 00:08:47.650 { 00:08:47.650 "dma_device_id": "system", 00:08:47.650 "dma_device_type": 1 00:08:47.650 }, 00:08:47.650 { 00:08:47.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.650 "dma_device_type": 2 00:08:47.650 } 00:08:47.650 ], 00:08:47.650 "driver_specific": {} 00:08:47.650 } 00:08:47.650 ] 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 BaseBdev3 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.650 04:24:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.650 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 [ 00:08:47.651 { 00:08:47.651 "name": "BaseBdev3", 00:08:47.651 "aliases": [ 00:08:47.651 "dee24b2e-275b-4452-9ea9-19d7b0c429b4" 00:08:47.651 ], 00:08:47.651 "product_name": "Malloc disk", 00:08:47.651 "block_size": 512, 00:08:47.651 "num_blocks": 65536, 00:08:47.651 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:47.651 "assigned_rate_limits": { 00:08:47.651 "rw_ios_per_sec": 0, 00:08:47.651 "rw_mbytes_per_sec": 0, 00:08:47.651 "r_mbytes_per_sec": 0, 00:08:47.651 "w_mbytes_per_sec": 0 00:08:47.651 }, 00:08:47.651 "claimed": false, 00:08:47.651 "zoned": false, 00:08:47.651 "supported_io_types": { 00:08:47.651 "read": true, 00:08:47.651 "write": true, 00:08:47.651 "unmap": true, 00:08:47.651 "flush": true, 00:08:47.651 "reset": true, 00:08:47.651 "nvme_admin": false, 00:08:47.651 "nvme_io": false, 00:08:47.651 "nvme_io_md": false, 00:08:47.651 "write_zeroes": true, 
00:08:47.651 "zcopy": true, 00:08:47.651 "get_zone_info": false, 00:08:47.651 "zone_management": false, 00:08:47.651 "zone_append": false, 00:08:47.651 "compare": false, 00:08:47.651 "compare_and_write": false, 00:08:47.651 "abort": true, 00:08:47.651 "seek_hole": false, 00:08:47.651 "seek_data": false, 00:08:47.651 "copy": true, 00:08:47.651 "nvme_iov_md": false 00:08:47.651 }, 00:08:47.651 "memory_domains": [ 00:08:47.651 { 00:08:47.651 "dma_device_id": "system", 00:08:47.651 "dma_device_type": 1 00:08:47.651 }, 00:08:47.651 { 00:08:47.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.651 "dma_device_type": 2 00:08:47.651 } 00:08:47.651 ], 00:08:47.651 "driver_specific": {} 00:08:47.651 } 00:08:47.651 ] 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 [2024-12-13 04:24:47.502857] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.651 [2024-12-13 04:24:47.502974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.651 [2024-12-13 04:24:47.503016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.651 [2024-12-13 04:24:47.505102] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.651 "name": "Existed_Raid", 00:08:47.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.651 "strip_size_kb": 64, 00:08:47.651 "state": "configuring", 00:08:47.651 "raid_level": "concat", 00:08:47.651 "superblock": false, 00:08:47.651 "num_base_bdevs": 3, 00:08:47.651 "num_base_bdevs_discovered": 2, 00:08:47.651 "num_base_bdevs_operational": 3, 00:08:47.651 "base_bdevs_list": [ 00:08:47.651 { 00:08:47.651 "name": "BaseBdev1", 00:08:47.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.651 "is_configured": false, 00:08:47.651 "data_offset": 0, 00:08:47.651 "data_size": 0 00:08:47.651 }, 00:08:47.651 { 00:08:47.651 "name": "BaseBdev2", 00:08:47.651 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:47.651 "is_configured": true, 00:08:47.651 "data_offset": 0, 00:08:47.651 "data_size": 65536 00:08:47.651 }, 00:08:47.651 { 00:08:47.651 "name": "BaseBdev3", 00:08:47.651 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:47.651 "is_configured": true, 00:08:47.651 "data_offset": 0, 00:08:47.651 "data_size": 65536 00:08:47.651 } 00:08:47.651 ] 00:08:47.651 }' 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.651 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.911 [2024-12-13 04:24:47.902154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.911 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.171 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.171 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.171 "name": "Existed_Raid", 00:08:48.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.171 "strip_size_kb": 64, 00:08:48.171 "state": "configuring", 00:08:48.171 "raid_level": "concat", 00:08:48.171 "superblock": false, 
00:08:48.171 "num_base_bdevs": 3, 00:08:48.171 "num_base_bdevs_discovered": 1, 00:08:48.171 "num_base_bdevs_operational": 3, 00:08:48.171 "base_bdevs_list": [ 00:08:48.171 { 00:08:48.171 "name": "BaseBdev1", 00:08:48.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.171 "is_configured": false, 00:08:48.171 "data_offset": 0, 00:08:48.171 "data_size": 0 00:08:48.171 }, 00:08:48.171 { 00:08:48.171 "name": null, 00:08:48.171 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:48.171 "is_configured": false, 00:08:48.171 "data_offset": 0, 00:08:48.171 "data_size": 65536 00:08:48.171 }, 00:08:48.171 { 00:08:48.171 "name": "BaseBdev3", 00:08:48.171 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:48.171 "is_configured": true, 00:08:48.171 "data_offset": 0, 00:08:48.171 "data_size": 65536 00:08:48.171 } 00:08:48.171 ] 00:08:48.171 }' 00:08:48.171 04:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.171 04:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.431 
04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.431 [2024-12-13 04:24:48.382137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.431 BaseBdev1 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.431 [ 00:08:48.431 { 00:08:48.431 "name": "BaseBdev1", 00:08:48.431 "aliases": [ 00:08:48.431 "1f04062f-3862-4c50-ac24-13180eae678b" 00:08:48.431 ], 00:08:48.431 "product_name": 
"Malloc disk", 00:08:48.431 "block_size": 512, 00:08:48.431 "num_blocks": 65536, 00:08:48.431 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:48.431 "assigned_rate_limits": { 00:08:48.431 "rw_ios_per_sec": 0, 00:08:48.431 "rw_mbytes_per_sec": 0, 00:08:48.431 "r_mbytes_per_sec": 0, 00:08:48.431 "w_mbytes_per_sec": 0 00:08:48.431 }, 00:08:48.431 "claimed": true, 00:08:48.431 "claim_type": "exclusive_write", 00:08:48.431 "zoned": false, 00:08:48.431 "supported_io_types": { 00:08:48.431 "read": true, 00:08:48.431 "write": true, 00:08:48.431 "unmap": true, 00:08:48.431 "flush": true, 00:08:48.431 "reset": true, 00:08:48.431 "nvme_admin": false, 00:08:48.431 "nvme_io": false, 00:08:48.431 "nvme_io_md": false, 00:08:48.431 "write_zeroes": true, 00:08:48.431 "zcopy": true, 00:08:48.431 "get_zone_info": false, 00:08:48.431 "zone_management": false, 00:08:48.431 "zone_append": false, 00:08:48.431 "compare": false, 00:08:48.431 "compare_and_write": false, 00:08:48.431 "abort": true, 00:08:48.431 "seek_hole": false, 00:08:48.431 "seek_data": false, 00:08:48.431 "copy": true, 00:08:48.431 "nvme_iov_md": false 00:08:48.431 }, 00:08:48.431 "memory_domains": [ 00:08:48.431 { 00:08:48.431 "dma_device_id": "system", 00:08:48.431 "dma_device_type": 1 00:08:48.431 }, 00:08:48.431 { 00:08:48.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.431 "dma_device_type": 2 00:08:48.431 } 00:08:48.431 ], 00:08:48.431 "driver_specific": {} 00:08:48.431 } 00:08:48.431 ] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.431 04:24:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.431 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.691 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.691 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.691 "name": "Existed_Raid", 00:08:48.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.691 "strip_size_kb": 64, 00:08:48.691 "state": "configuring", 00:08:48.691 "raid_level": "concat", 00:08:48.691 "superblock": false, 00:08:48.691 "num_base_bdevs": 3, 00:08:48.691 "num_base_bdevs_discovered": 2, 00:08:48.691 "num_base_bdevs_operational": 3, 00:08:48.691 "base_bdevs_list": [ 00:08:48.691 { 00:08:48.691 "name": "BaseBdev1", 
00:08:48.691 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:48.691 "is_configured": true, 00:08:48.691 "data_offset": 0, 00:08:48.691 "data_size": 65536 00:08:48.691 }, 00:08:48.691 { 00:08:48.691 "name": null, 00:08:48.691 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:48.691 "is_configured": false, 00:08:48.691 "data_offset": 0, 00:08:48.691 "data_size": 65536 00:08:48.691 }, 00:08:48.691 { 00:08:48.691 "name": "BaseBdev3", 00:08:48.691 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:48.691 "is_configured": true, 00:08:48.691 "data_offset": 0, 00:08:48.691 "data_size": 65536 00:08:48.691 } 00:08:48.691 ] 00:08:48.691 }' 00:08:48.691 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.691 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.951 [2024-12-13 04:24:48.913259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.951 
04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.951 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.211 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.211 "name": "Existed_Raid", 00:08:49.211 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:49.211 "strip_size_kb": 64, 00:08:49.211 "state": "configuring", 00:08:49.211 "raid_level": "concat", 00:08:49.211 "superblock": false, 00:08:49.211 "num_base_bdevs": 3, 00:08:49.211 "num_base_bdevs_discovered": 1, 00:08:49.211 "num_base_bdevs_operational": 3, 00:08:49.211 "base_bdevs_list": [ 00:08:49.211 { 00:08:49.211 "name": "BaseBdev1", 00:08:49.211 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:49.211 "is_configured": true, 00:08:49.211 "data_offset": 0, 00:08:49.211 "data_size": 65536 00:08:49.211 }, 00:08:49.211 { 00:08:49.211 "name": null, 00:08:49.211 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:49.211 "is_configured": false, 00:08:49.211 "data_offset": 0, 00:08:49.211 "data_size": 65536 00:08:49.211 }, 00:08:49.211 { 00:08:49.211 "name": null, 00:08:49.211 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:49.211 "is_configured": false, 00:08:49.211 "data_offset": 0, 00:08:49.211 "data_size": 65536 00:08:49.211 } 00:08:49.211 ] 00:08:49.211 }' 00:08:49.211 04:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.211 04:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.470 [2024-12-13 04:24:49.416474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.470 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.471 "name": "Existed_Raid", 00:08:49.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.471 "strip_size_kb": 64, 00:08:49.471 "state": "configuring", 00:08:49.471 "raid_level": "concat", 00:08:49.471 "superblock": false, 00:08:49.471 "num_base_bdevs": 3, 00:08:49.471 "num_base_bdevs_discovered": 2, 00:08:49.471 "num_base_bdevs_operational": 3, 00:08:49.471 "base_bdevs_list": [ 00:08:49.471 { 00:08:49.471 "name": "BaseBdev1", 00:08:49.471 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:49.471 "is_configured": true, 00:08:49.471 "data_offset": 0, 00:08:49.471 "data_size": 65536 00:08:49.471 }, 00:08:49.471 { 00:08:49.471 "name": null, 00:08:49.471 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:49.471 "is_configured": false, 00:08:49.471 "data_offset": 0, 00:08:49.471 "data_size": 65536 00:08:49.471 }, 00:08:49.471 { 00:08:49.471 "name": "BaseBdev3", 00:08:49.471 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:49.471 "is_configured": true, 00:08:49.471 "data_offset": 0, 00:08:49.471 "data_size": 65536 00:08:49.471 } 00:08:49.471 ] 00:08:49.471 }' 00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.471 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.040 04:24:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.040 [2024-12-13 04:24:49.923608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.040 "name": "Existed_Raid", 00:08:50.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.040 "strip_size_kb": 64, 00:08:50.040 "state": "configuring", 00:08:50.040 "raid_level": "concat", 00:08:50.040 "superblock": false, 00:08:50.040 "num_base_bdevs": 3, 00:08:50.040 "num_base_bdevs_discovered": 1, 00:08:50.040 "num_base_bdevs_operational": 3, 00:08:50.040 "base_bdevs_list": [ 00:08:50.040 { 00:08:50.040 "name": null, 00:08:50.040 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:50.040 "is_configured": false, 00:08:50.040 "data_offset": 0, 00:08:50.040 "data_size": 65536 00:08:50.040 }, 00:08:50.040 { 00:08:50.040 "name": null, 00:08:50.040 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:50.040 "is_configured": false, 00:08:50.040 "data_offset": 0, 00:08:50.040 "data_size": 65536 00:08:50.040 }, 00:08:50.040 { 00:08:50.040 "name": "BaseBdev3", 00:08:50.040 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:50.040 "is_configured": true, 00:08:50.040 "data_offset": 0, 00:08:50.040 "data_size": 65536 00:08:50.040 } 00:08:50.040 ] 00:08:50.040 }' 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.040 04:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.609 [2024-12-13 04:24:50.438582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.609 "name": "Existed_Raid", 00:08:50.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.609 "strip_size_kb": 64, 00:08:50.609 "state": "configuring", 00:08:50.609 "raid_level": "concat", 00:08:50.609 "superblock": false, 00:08:50.609 "num_base_bdevs": 3, 00:08:50.609 "num_base_bdevs_discovered": 2, 00:08:50.609 "num_base_bdevs_operational": 3, 00:08:50.609 "base_bdevs_list": [ 00:08:50.609 { 00:08:50.609 "name": null, 00:08:50.609 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:50.609 "is_configured": false, 00:08:50.609 "data_offset": 0, 00:08:50.609 "data_size": 65536 00:08:50.609 }, 00:08:50.609 { 00:08:50.609 "name": "BaseBdev2", 00:08:50.609 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:50.609 "is_configured": true, 00:08:50.609 "data_offset": 0, 00:08:50.609 "data_size": 65536 00:08:50.609 }, 00:08:50.609 { 
00:08:50.609 "name": "BaseBdev3", 00:08:50.609 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:50.609 "is_configured": true, 00:08:50.609 "data_offset": 0, 00:08:50.609 "data_size": 65536 00:08:50.609 } 00:08:50.609 ] 00:08:50.609 }' 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.609 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.869 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1f04062f-3862-4c50-ac24-13180eae678b 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.128 04:24:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.128 [2024-12-13 04:24:50.914394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:51.128 [2024-12-13 04:24:50.914457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:51.128 [2024-12-13 04:24:50.914470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:51.128 [2024-12-13 04:24:50.914736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:51.128 [2024-12-13 04:24:50.914889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:51.128 [2024-12-13 04:24:50.914899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:51.128 [2024-12-13 04:24:50.915104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.128 NewBaseBdev 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.128 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.129 [ 00:08:51.129 { 00:08:51.129 "name": "NewBaseBdev", 00:08:51.129 "aliases": [ 00:08:51.129 "1f04062f-3862-4c50-ac24-13180eae678b" 00:08:51.129 ], 00:08:51.129 "product_name": "Malloc disk", 00:08:51.129 "block_size": 512, 00:08:51.129 "num_blocks": 65536, 00:08:51.129 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:51.129 "assigned_rate_limits": { 00:08:51.129 "rw_ios_per_sec": 0, 00:08:51.129 "rw_mbytes_per_sec": 0, 00:08:51.129 "r_mbytes_per_sec": 0, 00:08:51.129 "w_mbytes_per_sec": 0 00:08:51.129 }, 00:08:51.129 "claimed": true, 00:08:51.129 "claim_type": "exclusive_write", 00:08:51.129 "zoned": false, 00:08:51.129 "supported_io_types": { 00:08:51.129 "read": true, 00:08:51.129 "write": true, 00:08:51.129 "unmap": true, 00:08:51.129 "flush": true, 00:08:51.129 "reset": true, 00:08:51.129 "nvme_admin": false, 00:08:51.129 "nvme_io": false, 00:08:51.129 "nvme_io_md": false, 00:08:51.129 "write_zeroes": true, 00:08:51.129 "zcopy": true, 00:08:51.129 "get_zone_info": false, 00:08:51.129 "zone_management": false, 00:08:51.129 "zone_append": false, 00:08:51.129 "compare": false, 00:08:51.129 "compare_and_write": false, 00:08:51.129 "abort": true, 00:08:51.129 "seek_hole": false, 00:08:51.129 "seek_data": false, 00:08:51.129 "copy": true, 00:08:51.129 "nvme_iov_md": false 00:08:51.129 }, 00:08:51.129 "memory_domains": [ 00:08:51.129 { 00:08:51.129 
"dma_device_id": "system", 00:08:51.129 "dma_device_type": 1 00:08:51.129 }, 00:08:51.129 { 00:08:51.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.129 "dma_device_type": 2 00:08:51.129 } 00:08:51.129 ], 00:08:51.129 "driver_specific": {} 00:08:51.129 } 00:08:51.129 ] 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.129 04:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.129 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.129 "name": "Existed_Raid", 00:08:51.129 "uuid": "e5820f9a-1a77-4bb6-a5c4-433a884719f3", 00:08:51.129 "strip_size_kb": 64, 00:08:51.129 "state": "online", 00:08:51.129 "raid_level": "concat", 00:08:51.129 "superblock": false, 00:08:51.129 "num_base_bdevs": 3, 00:08:51.129 "num_base_bdevs_discovered": 3, 00:08:51.129 "num_base_bdevs_operational": 3, 00:08:51.129 "base_bdevs_list": [ 00:08:51.129 { 00:08:51.129 "name": "NewBaseBdev", 00:08:51.129 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:51.129 "is_configured": true, 00:08:51.129 "data_offset": 0, 00:08:51.129 "data_size": 65536 00:08:51.129 }, 00:08:51.129 { 00:08:51.129 "name": "BaseBdev2", 00:08:51.129 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:51.129 "is_configured": true, 00:08:51.129 "data_offset": 0, 00:08:51.129 "data_size": 65536 00:08:51.129 }, 00:08:51.129 { 00:08:51.129 "name": "BaseBdev3", 00:08:51.129 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:51.129 "is_configured": true, 00:08:51.129 "data_offset": 0, 00:08:51.129 "data_size": 65536 00:08:51.129 } 00:08:51.129 ] 00:08:51.129 }' 00:08:51.129 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.129 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.389 [2024-12-13 04:24:51.369958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.389 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.649 "name": "Existed_Raid", 00:08:51.649 "aliases": [ 00:08:51.649 "e5820f9a-1a77-4bb6-a5c4-433a884719f3" 00:08:51.649 ], 00:08:51.649 "product_name": "Raid Volume", 00:08:51.649 "block_size": 512, 00:08:51.649 "num_blocks": 196608, 00:08:51.649 "uuid": "e5820f9a-1a77-4bb6-a5c4-433a884719f3", 00:08:51.649 "assigned_rate_limits": { 00:08:51.649 "rw_ios_per_sec": 0, 00:08:51.649 "rw_mbytes_per_sec": 0, 00:08:51.649 "r_mbytes_per_sec": 0, 00:08:51.649 "w_mbytes_per_sec": 0 00:08:51.649 }, 00:08:51.649 "claimed": false, 00:08:51.649 "zoned": false, 00:08:51.649 "supported_io_types": { 00:08:51.649 "read": true, 00:08:51.649 "write": true, 00:08:51.649 "unmap": true, 00:08:51.649 "flush": true, 00:08:51.649 "reset": true, 00:08:51.649 "nvme_admin": false, 00:08:51.649 "nvme_io": false, 00:08:51.649 "nvme_io_md": false, 00:08:51.649 "write_zeroes": true, 00:08:51.649 "zcopy": false, 
00:08:51.649 "get_zone_info": false, 00:08:51.649 "zone_management": false, 00:08:51.649 "zone_append": false, 00:08:51.649 "compare": false, 00:08:51.649 "compare_and_write": false, 00:08:51.649 "abort": false, 00:08:51.649 "seek_hole": false, 00:08:51.649 "seek_data": false, 00:08:51.649 "copy": false, 00:08:51.649 "nvme_iov_md": false 00:08:51.649 }, 00:08:51.649 "memory_domains": [ 00:08:51.649 { 00:08:51.649 "dma_device_id": "system", 00:08:51.649 "dma_device_type": 1 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.649 "dma_device_type": 2 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "dma_device_id": "system", 00:08:51.649 "dma_device_type": 1 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.649 "dma_device_type": 2 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "dma_device_id": "system", 00:08:51.649 "dma_device_type": 1 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.649 "dma_device_type": 2 00:08:51.649 } 00:08:51.649 ], 00:08:51.649 "driver_specific": { 00:08:51.649 "raid": { 00:08:51.649 "uuid": "e5820f9a-1a77-4bb6-a5c4-433a884719f3", 00:08:51.649 "strip_size_kb": 64, 00:08:51.649 "state": "online", 00:08:51.649 "raid_level": "concat", 00:08:51.649 "superblock": false, 00:08:51.649 "num_base_bdevs": 3, 00:08:51.649 "num_base_bdevs_discovered": 3, 00:08:51.649 "num_base_bdevs_operational": 3, 00:08:51.649 "base_bdevs_list": [ 00:08:51.649 { 00:08:51.649 "name": "NewBaseBdev", 00:08:51.649 "uuid": "1f04062f-3862-4c50-ac24-13180eae678b", 00:08:51.649 "is_configured": true, 00:08:51.649 "data_offset": 0, 00:08:51.649 "data_size": 65536 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "name": "BaseBdev2", 00:08:51.649 "uuid": "884baffe-7c20-4d9d-980b-7d1817129505", 00:08:51.649 "is_configured": true, 00:08:51.649 "data_offset": 0, 00:08:51.649 "data_size": 65536 00:08:51.649 }, 00:08:51.649 { 00:08:51.649 "name": "BaseBdev3", 
00:08:51.649 "uuid": "dee24b2e-275b-4452-9ea9-19d7b0c429b4", 00:08:51.649 "is_configured": true, 00:08:51.649 "data_offset": 0, 00:08:51.649 "data_size": 65536 00:08:51.649 } 00:08:51.649 ] 00:08:51.649 } 00:08:51.649 } 00:08:51.649 }' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:51.649 BaseBdev2 00:08:51.649 BaseBdev3' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:51.649 [2024-12-13 04:24:51.621190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.649 [2024-12-13 04:24:51.621268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.649 [2024-12-13 04:24:51.621353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.649 [2024-12-13 04:24:51.621430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.649 [2024-12-13 04:24:51.621443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78410 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78410 ']' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78410 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78410 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78410' 00:08:51.649 killing process with pid 78410 00:08:51.649 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78410 00:08:51.649 
[2024-12-13 04:24:51.663769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.909 04:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78410 00:08:51.909 [2024-12-13 04:24:51.720393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:52.168 00:08:52.168 real 0m8.932s 00:08:52.168 user 0m14.933s 00:08:52.168 sys 0m1.945s 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.168 ************************************ 00:08:52.168 END TEST raid_state_function_test 00:08:52.168 ************************************ 00:08:52.168 04:24:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:52.168 04:24:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.168 04:24:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.168 04:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.168 ************************************ 00:08:52.168 START TEST raid_state_function_test_sb 00:08:52.168 ************************************ 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:52.168 04:24:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.168 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79020 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79020' 00:08:52.169 Process raid pid: 79020 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79020 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79020 ']' 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.169 04:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.429 [2024-12-13 04:24:52.220848] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:52.429 [2024-12-13 04:24:52.221058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.429 [2024-12-13 04:24:52.355231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.429 [2024-12-13 04:24:52.394906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.688 [2024-12-13 04:24:52.470995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.688 [2024-12-13 04:24:52.471109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.257 [2024-12-13 04:24:53.065106] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.257 [2024-12-13 04:24:53.065249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.257 [2024-12-13 04:24:53.065274] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.257 [2024-12-13 04:24:53.065286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.257 [2024-12-13 04:24:53.065293] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:53.257 [2024-12-13 04:24:53.065305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.257 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.258 "name": "Existed_Raid", 00:08:53.258 "uuid": "b8ad9b4f-8282-4be5-8077-bd2982d4cbc0", 00:08:53.258 "strip_size_kb": 64, 00:08:53.258 "state": "configuring", 00:08:53.258 "raid_level": "concat", 00:08:53.258 "superblock": true, 00:08:53.258 "num_base_bdevs": 3, 00:08:53.258 "num_base_bdevs_discovered": 0, 00:08:53.258 "num_base_bdevs_operational": 3, 00:08:53.258 "base_bdevs_list": [ 00:08:53.258 { 00:08:53.258 "name": "BaseBdev1", 00:08:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.258 "is_configured": false, 00:08:53.258 "data_offset": 0, 00:08:53.258 "data_size": 0 00:08:53.258 }, 00:08:53.258 { 00:08:53.258 "name": "BaseBdev2", 00:08:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.258 "is_configured": false, 00:08:53.258 "data_offset": 0, 00:08:53.258 "data_size": 0 00:08:53.258 }, 00:08:53.258 { 00:08:53.258 "name": "BaseBdev3", 00:08:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.258 "is_configured": false, 00:08:53.258 "data_offset": 0, 00:08:53.258 "data_size": 0 00:08:53.258 } 00:08:53.258 ] 00:08:53.258 }' 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.258 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.517 [2024-12-13 04:24:53.496272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.517 [2024-12-13 04:24:53.496413] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.517 [2024-12-13 04:24:53.508259] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.517 [2024-12-13 04:24:53.508342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.517 [2024-12-13 04:24:53.508375] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.517 [2024-12-13 04:24:53.508413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.517 [2024-12-13 04:24:53.508431] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.517 [2024-12-13 04:24:53.508452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.517 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.776 [2024-12-13 04:24:53.535266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.776 BaseBdev1 
00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.776 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.776 [ 00:08:53.776 { 00:08:53.776 "name": "BaseBdev1", 00:08:53.776 "aliases": [ 00:08:53.776 "8540becc-b47a-446f-b35d-22eda48f2882" 00:08:53.776 ], 00:08:53.776 "product_name": "Malloc disk", 00:08:53.776 "block_size": 512, 00:08:53.776 "num_blocks": 65536, 00:08:53.776 "uuid": "8540becc-b47a-446f-b35d-22eda48f2882", 00:08:53.776 "assigned_rate_limits": { 00:08:53.776 
"rw_ios_per_sec": 0, 00:08:53.776 "rw_mbytes_per_sec": 0, 00:08:53.776 "r_mbytes_per_sec": 0, 00:08:53.776 "w_mbytes_per_sec": 0 00:08:53.776 }, 00:08:53.776 "claimed": true, 00:08:53.776 "claim_type": "exclusive_write", 00:08:53.776 "zoned": false, 00:08:53.776 "supported_io_types": { 00:08:53.776 "read": true, 00:08:53.776 "write": true, 00:08:53.776 "unmap": true, 00:08:53.776 "flush": true, 00:08:53.776 "reset": true, 00:08:53.776 "nvme_admin": false, 00:08:53.776 "nvme_io": false, 00:08:53.776 "nvme_io_md": false, 00:08:53.776 "write_zeroes": true, 00:08:53.776 "zcopy": true, 00:08:53.776 "get_zone_info": false, 00:08:53.776 "zone_management": false, 00:08:53.776 "zone_append": false, 00:08:53.776 "compare": false, 00:08:53.776 "compare_and_write": false, 00:08:53.776 "abort": true, 00:08:53.776 "seek_hole": false, 00:08:53.776 "seek_data": false, 00:08:53.776 "copy": true, 00:08:53.776 "nvme_iov_md": false 00:08:53.776 }, 00:08:53.776 "memory_domains": [ 00:08:53.776 { 00:08:53.776 "dma_device_id": "system", 00:08:53.776 "dma_device_type": 1 00:08:53.776 }, 00:08:53.776 { 00:08:53.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.776 "dma_device_type": 2 00:08:53.777 } 00:08:53.777 ], 00:08:53.777 "driver_specific": {} 00:08:53.777 } 00:08:53.777 ] 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.777 "name": "Existed_Raid", 00:08:53.777 "uuid": "7ddd5ec1-ca1f-4fc5-9ba4-23b4dc17787d", 00:08:53.777 "strip_size_kb": 64, 00:08:53.777 "state": "configuring", 00:08:53.777 "raid_level": "concat", 00:08:53.777 "superblock": true, 00:08:53.777 "num_base_bdevs": 3, 00:08:53.777 "num_base_bdevs_discovered": 1, 00:08:53.777 "num_base_bdevs_operational": 3, 00:08:53.777 "base_bdevs_list": [ 00:08:53.777 { 00:08:53.777 "name": "BaseBdev1", 00:08:53.777 "uuid": "8540becc-b47a-446f-b35d-22eda48f2882", 00:08:53.777 "is_configured": true, 00:08:53.777 "data_offset": 2048, 00:08:53.777 "data_size": 
63488 00:08:53.777 }, 00:08:53.777 { 00:08:53.777 "name": "BaseBdev2", 00:08:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.777 "is_configured": false, 00:08:53.777 "data_offset": 0, 00:08:53.777 "data_size": 0 00:08:53.777 }, 00:08:53.777 { 00:08:53.777 "name": "BaseBdev3", 00:08:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.777 "is_configured": false, 00:08:53.777 "data_offset": 0, 00:08:53.777 "data_size": 0 00:08:53.777 } 00:08:53.777 ] 00:08:53.777 }' 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.777 04:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.036 [2024-12-13 04:24:54.030494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.036 [2024-12-13 04:24:54.030559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.036 [2024-12-13 04:24:54.042498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.036 [2024-12-13 
04:24:54.044682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.036 [2024-12-13 04:24:54.044724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.036 [2024-12-13 04:24:54.044734] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.036 [2024-12-13 04:24:54.044744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.036 04:24:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.295 "name": "Existed_Raid", 00:08:54.295 "uuid": "08a37062-aa65-4f22-988e-ef62c8985d1e", 00:08:54.295 "strip_size_kb": 64, 00:08:54.295 "state": "configuring", 00:08:54.295 "raid_level": "concat", 00:08:54.295 "superblock": true, 00:08:54.295 "num_base_bdevs": 3, 00:08:54.295 "num_base_bdevs_discovered": 1, 00:08:54.295 "num_base_bdevs_operational": 3, 00:08:54.295 "base_bdevs_list": [ 00:08:54.295 { 00:08:54.295 "name": "BaseBdev1", 00:08:54.295 "uuid": "8540becc-b47a-446f-b35d-22eda48f2882", 00:08:54.295 "is_configured": true, 00:08:54.295 "data_offset": 2048, 00:08:54.295 "data_size": 63488 00:08:54.295 }, 00:08:54.295 { 00:08:54.295 "name": "BaseBdev2", 00:08:54.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.295 "is_configured": false, 00:08:54.295 "data_offset": 0, 00:08:54.295 "data_size": 0 00:08:54.295 }, 00:08:54.295 { 00:08:54.295 "name": "BaseBdev3", 00:08:54.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.295 "is_configured": false, 00:08:54.295 "data_offset": 0, 00:08:54.295 "data_size": 0 00:08:54.295 } 00:08:54.295 ] 00:08:54.295 }' 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.295 04:24:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.555 [2024-12-13 04:24:54.526414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.555 BaseBdev2 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.555 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.555 [ 00:08:54.555 { 00:08:54.555 "name": "BaseBdev2", 00:08:54.555 "aliases": [ 00:08:54.555 "64041eca-2635-4461-b3d0-8c493c5f7b92" 00:08:54.555 ], 00:08:54.555 "product_name": "Malloc disk", 00:08:54.555 "block_size": 512, 00:08:54.555 "num_blocks": 65536, 00:08:54.556 "uuid": "64041eca-2635-4461-b3d0-8c493c5f7b92", 00:08:54.556 "assigned_rate_limits": { 00:08:54.556 "rw_ios_per_sec": 0, 00:08:54.556 "rw_mbytes_per_sec": 0, 00:08:54.556 "r_mbytes_per_sec": 0, 00:08:54.556 "w_mbytes_per_sec": 0 00:08:54.556 }, 00:08:54.556 "claimed": true, 00:08:54.556 "claim_type": "exclusive_write", 00:08:54.556 "zoned": false, 00:08:54.556 "supported_io_types": { 00:08:54.556 "read": true, 00:08:54.556 "write": true, 00:08:54.556 "unmap": true, 00:08:54.556 "flush": true, 00:08:54.556 "reset": true, 00:08:54.556 "nvme_admin": false, 00:08:54.556 "nvme_io": false, 00:08:54.556 "nvme_io_md": false, 00:08:54.556 "write_zeroes": true, 00:08:54.556 "zcopy": true, 00:08:54.556 "get_zone_info": false, 00:08:54.556 "zone_management": false, 00:08:54.556 "zone_append": false, 00:08:54.556 "compare": false, 00:08:54.556 "compare_and_write": false, 00:08:54.556 "abort": true, 00:08:54.556 "seek_hole": false, 00:08:54.556 "seek_data": false, 00:08:54.556 "copy": true, 00:08:54.556 "nvme_iov_md": false 00:08:54.556 }, 00:08:54.556 "memory_domains": [ 00:08:54.556 { 00:08:54.556 "dma_device_id": "system", 00:08:54.556 "dma_device_type": 1 00:08:54.556 }, 00:08:54.556 { 00:08:54.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.556 "dma_device_type": 2 00:08:54.556 } 00:08:54.556 ], 00:08:54.556 "driver_specific": {} 00:08:54.556 } 00:08:54.556 ] 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.556 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.815 "name": "Existed_Raid", 00:08:54.815 "uuid": "08a37062-aa65-4f22-988e-ef62c8985d1e", 00:08:54.815 "strip_size_kb": 64, 00:08:54.815 "state": "configuring", 00:08:54.815 "raid_level": "concat", 00:08:54.815 "superblock": true, 00:08:54.815 "num_base_bdevs": 3, 00:08:54.815 "num_base_bdevs_discovered": 2, 00:08:54.815 "num_base_bdevs_operational": 3, 00:08:54.815 "base_bdevs_list": [ 00:08:54.815 { 00:08:54.815 "name": "BaseBdev1", 00:08:54.815 "uuid": "8540becc-b47a-446f-b35d-22eda48f2882", 00:08:54.815 "is_configured": true, 00:08:54.815 "data_offset": 2048, 00:08:54.815 "data_size": 63488 00:08:54.815 }, 00:08:54.815 { 00:08:54.815 "name": "BaseBdev2", 00:08:54.815 "uuid": "64041eca-2635-4461-b3d0-8c493c5f7b92", 00:08:54.815 "is_configured": true, 00:08:54.815 "data_offset": 2048, 00:08:54.815 "data_size": 63488 00:08:54.815 }, 00:08:54.815 { 00:08:54.815 "name": "BaseBdev3", 00:08:54.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.815 "is_configured": false, 00:08:54.815 "data_offset": 0, 00:08:54.815 "data_size": 0 00:08:54.815 } 00:08:54.815 ] 00:08:54.815 }' 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.815 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.074 04:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:55.074 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.074 04:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.074 BaseBdev3 00:08:55.074 [2024-12-13 04:24:55.005734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.074 [2024-12-13 
04:24:55.005982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:55.074 [2024-12-13 04:24:55.006007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.074 [2024-12-13 04:24:55.006361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:55.074 [2024-12-13 04:24:55.006547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:55.074 [2024-12-13 04:24:55.006564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:55.074 [2024-12-13 04:24:55.006721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.074 [ 00:08:55.074 { 00:08:55.074 "name": "BaseBdev3", 00:08:55.074 "aliases": [ 00:08:55.074 "6b109580-0fdd-48ea-a823-1cf6f22efe62" 00:08:55.074 ], 00:08:55.074 "product_name": "Malloc disk", 00:08:55.074 "block_size": 512, 00:08:55.074 "num_blocks": 65536, 00:08:55.074 "uuid": "6b109580-0fdd-48ea-a823-1cf6f22efe62", 00:08:55.074 "assigned_rate_limits": { 00:08:55.074 "rw_ios_per_sec": 0, 00:08:55.074 "rw_mbytes_per_sec": 0, 00:08:55.074 "r_mbytes_per_sec": 0, 00:08:55.074 "w_mbytes_per_sec": 0 00:08:55.074 }, 00:08:55.074 "claimed": true, 00:08:55.074 "claim_type": "exclusive_write", 00:08:55.074 "zoned": false, 00:08:55.074 "supported_io_types": { 00:08:55.074 "read": true, 00:08:55.074 "write": true, 00:08:55.074 "unmap": true, 00:08:55.074 "flush": true, 00:08:55.074 "reset": true, 00:08:55.074 "nvme_admin": false, 00:08:55.074 "nvme_io": false, 00:08:55.074 "nvme_io_md": false, 00:08:55.074 "write_zeroes": true, 00:08:55.074 "zcopy": true, 00:08:55.074 "get_zone_info": false, 00:08:55.074 "zone_management": false, 00:08:55.074 "zone_append": false, 00:08:55.074 "compare": false, 00:08:55.074 "compare_and_write": false, 00:08:55.074 "abort": true, 00:08:55.074 "seek_hole": false, 00:08:55.074 "seek_data": false, 00:08:55.074 "copy": true, 00:08:55.074 "nvme_iov_md": false 00:08:55.074 }, 00:08:55.074 "memory_domains": [ 00:08:55.074 { 00:08:55.074 "dma_device_id": "system", 00:08:55.074 "dma_device_type": 1 00:08:55.074 }, 00:08:55.074 { 00:08:55.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.074 "dma_device_type": 2 00:08:55.074 } 00:08:55.074 ], 00:08:55.074 "driver_specific": {} 
00:08:55.074 } 00:08:55.074 ] 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.074 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.333 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.333 "name": "Existed_Raid", 00:08:55.333 "uuid": "08a37062-aa65-4f22-988e-ef62c8985d1e", 00:08:55.333 "strip_size_kb": 64, 00:08:55.333 "state": "online", 00:08:55.333 "raid_level": "concat", 00:08:55.333 "superblock": true, 00:08:55.333 "num_base_bdevs": 3, 00:08:55.333 "num_base_bdevs_discovered": 3, 00:08:55.333 "num_base_bdevs_operational": 3, 00:08:55.333 "base_bdevs_list": [ 00:08:55.333 { 00:08:55.333 "name": "BaseBdev1", 00:08:55.333 "uuid": "8540becc-b47a-446f-b35d-22eda48f2882", 00:08:55.333 "is_configured": true, 00:08:55.333 "data_offset": 2048, 00:08:55.333 "data_size": 63488 00:08:55.333 }, 00:08:55.333 { 00:08:55.333 "name": "BaseBdev2", 00:08:55.333 "uuid": "64041eca-2635-4461-b3d0-8c493c5f7b92", 00:08:55.333 "is_configured": true, 00:08:55.333 "data_offset": 2048, 00:08:55.333 "data_size": 63488 00:08:55.333 }, 00:08:55.333 { 00:08:55.333 "name": "BaseBdev3", 00:08:55.333 "uuid": "6b109580-0fdd-48ea-a823-1cf6f22efe62", 00:08:55.333 "is_configured": true, 00:08:55.333 "data_offset": 2048, 00:08:55.333 "data_size": 63488 00:08:55.333 } 00:08:55.333 ] 00:08:55.333 }' 00:08:55.333 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.333 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.593 [2024-12-13 04:24:55.513216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.593 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.593 "name": "Existed_Raid", 00:08:55.593 "aliases": [ 00:08:55.593 "08a37062-aa65-4f22-988e-ef62c8985d1e" 00:08:55.593 ], 00:08:55.593 "product_name": "Raid Volume", 00:08:55.593 "block_size": 512, 00:08:55.593 "num_blocks": 190464, 00:08:55.593 "uuid": "08a37062-aa65-4f22-988e-ef62c8985d1e", 00:08:55.593 "assigned_rate_limits": { 00:08:55.593 "rw_ios_per_sec": 0, 00:08:55.593 "rw_mbytes_per_sec": 0, 00:08:55.593 "r_mbytes_per_sec": 0, 00:08:55.593 "w_mbytes_per_sec": 0 00:08:55.593 }, 00:08:55.593 "claimed": false, 00:08:55.593 "zoned": false, 00:08:55.593 "supported_io_types": { 00:08:55.593 "read": true, 00:08:55.593 "write": true, 00:08:55.593 "unmap": true, 00:08:55.593 "flush": true, 00:08:55.593 "reset": true, 00:08:55.593 "nvme_admin": false, 00:08:55.593 "nvme_io": false, 00:08:55.593 "nvme_io_md": false, 00:08:55.593 
"write_zeroes": true, 00:08:55.593 "zcopy": false, 00:08:55.593 "get_zone_info": false, 00:08:55.593 "zone_management": false, 00:08:55.594 "zone_append": false, 00:08:55.594 "compare": false, 00:08:55.594 "compare_and_write": false, 00:08:55.594 "abort": false, 00:08:55.594 "seek_hole": false, 00:08:55.594 "seek_data": false, 00:08:55.594 "copy": false, 00:08:55.594 "nvme_iov_md": false 00:08:55.594 }, 00:08:55.594 "memory_domains": [ 00:08:55.594 { 00:08:55.594 "dma_device_id": "system", 00:08:55.594 "dma_device_type": 1 00:08:55.594 }, 00:08:55.594 { 00:08:55.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.594 "dma_device_type": 2 00:08:55.594 }, 00:08:55.594 { 00:08:55.594 "dma_device_id": "system", 00:08:55.594 "dma_device_type": 1 00:08:55.594 }, 00:08:55.594 { 00:08:55.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.594 "dma_device_type": 2 00:08:55.594 }, 00:08:55.594 { 00:08:55.594 "dma_device_id": "system", 00:08:55.594 "dma_device_type": 1 00:08:55.594 }, 00:08:55.594 { 00:08:55.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.594 "dma_device_type": 2 00:08:55.594 } 00:08:55.594 ], 00:08:55.594 "driver_specific": { 00:08:55.594 "raid": { 00:08:55.594 "uuid": "08a37062-aa65-4f22-988e-ef62c8985d1e", 00:08:55.594 "strip_size_kb": 64, 00:08:55.594 "state": "online", 00:08:55.594 "raid_level": "concat", 00:08:55.594 "superblock": true, 00:08:55.594 "num_base_bdevs": 3, 00:08:55.594 "num_base_bdevs_discovered": 3, 00:08:55.594 "num_base_bdevs_operational": 3, 00:08:55.594 "base_bdevs_list": [ 00:08:55.594 { 00:08:55.594 "name": "BaseBdev1", 00:08:55.594 "uuid": "8540becc-b47a-446f-b35d-22eda48f2882", 00:08:55.594 "is_configured": true, 00:08:55.594 "data_offset": 2048, 00:08:55.594 "data_size": 63488 00:08:55.594 }, 00:08:55.594 { 00:08:55.594 "name": "BaseBdev2", 00:08:55.594 "uuid": "64041eca-2635-4461-b3d0-8c493c5f7b92", 00:08:55.594 "is_configured": true, 00:08:55.594 "data_offset": 2048, 00:08:55.594 "data_size": 63488 00:08:55.594 }, 
00:08:55.594 { 00:08:55.594 "name": "BaseBdev3", 00:08:55.594 "uuid": "6b109580-0fdd-48ea-a823-1cf6f22efe62", 00:08:55.594 "is_configured": true, 00:08:55.594 "data_offset": 2048, 00:08:55.594 "data_size": 63488 00:08:55.594 } 00:08:55.594 ] 00:08:55.594 } 00:08:55.594 } 00:08:55.594 }' 00:08:55.594 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.594 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.594 BaseBdev2 00:08:55.594 BaseBdev3' 00:08:55.594 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.853 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.854 
04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.854 [2024-12-13 04:24:55.800484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.854 [2024-12-13 04:24:55.800521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.854 [2024-12-13 04:24:55.800606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.854 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.113 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.113 "name": "Existed_Raid", 00:08:56.113 "uuid": "08a37062-aa65-4f22-988e-ef62c8985d1e", 00:08:56.113 "strip_size_kb": 64, 00:08:56.113 "state": "offline", 00:08:56.113 "raid_level": "concat", 00:08:56.113 "superblock": true, 00:08:56.113 "num_base_bdevs": 3, 00:08:56.113 "num_base_bdevs_discovered": 2, 00:08:56.113 "num_base_bdevs_operational": 2, 00:08:56.113 "base_bdevs_list": [ 00:08:56.113 { 00:08:56.113 "name": null, 00:08:56.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.113 "is_configured": false, 00:08:56.113 "data_offset": 0, 00:08:56.113 "data_size": 63488 00:08:56.113 }, 00:08:56.113 { 00:08:56.113 "name": "BaseBdev2", 00:08:56.113 "uuid": "64041eca-2635-4461-b3d0-8c493c5f7b92", 00:08:56.113 "is_configured": true, 00:08:56.113 "data_offset": 2048, 00:08:56.113 "data_size": 63488 00:08:56.113 }, 00:08:56.113 { 00:08:56.113 "name": "BaseBdev3", 00:08:56.113 "uuid": "6b109580-0fdd-48ea-a823-1cf6f22efe62", 
00:08:56.113 "is_configured": true, 00:08:56.113 "data_offset": 2048, 00:08:56.113 "data_size": 63488 00:08:56.113 } 00:08:56.113 ] 00:08:56.113 }' 00:08:56.113 04:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.113 04:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.373 [2024-12-13 04:24:56.292223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.373 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.373 [2024-12-13 04:24:56.372656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.373 [2024-12-13 04:24:56.372775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.633 BaseBdev2 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.633 04:24:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.633 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.634 [ 00:08:56.634 { 00:08:56.634 "name": "BaseBdev2", 00:08:56.634 "aliases": [ 00:08:56.634 "56cd24a7-8186-4d94-9825-eaebc9145f4c" 00:08:56.634 ], 00:08:56.634 "product_name": "Malloc disk", 00:08:56.634 "block_size": 512, 00:08:56.634 "num_blocks": 65536, 00:08:56.634 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:56.634 "assigned_rate_limits": { 00:08:56.634 "rw_ios_per_sec": 0, 00:08:56.634 "rw_mbytes_per_sec": 0, 00:08:56.634 "r_mbytes_per_sec": 0, 00:08:56.634 "w_mbytes_per_sec": 0 00:08:56.634 }, 00:08:56.634 "claimed": false, 00:08:56.634 "zoned": false, 00:08:56.634 "supported_io_types": { 00:08:56.634 "read": true, 00:08:56.634 "write": true, 00:08:56.634 "unmap": true, 00:08:56.634 "flush": true, 00:08:56.634 "reset": true, 00:08:56.634 "nvme_admin": false, 00:08:56.634 "nvme_io": false, 00:08:56.634 "nvme_io_md": false, 00:08:56.634 "write_zeroes": true, 00:08:56.634 "zcopy": true, 00:08:56.634 "get_zone_info": false, 00:08:56.634 
"zone_management": false, 00:08:56.634 "zone_append": false, 00:08:56.634 "compare": false, 00:08:56.634 "compare_and_write": false, 00:08:56.634 "abort": true, 00:08:56.634 "seek_hole": false, 00:08:56.634 "seek_data": false, 00:08:56.634 "copy": true, 00:08:56.634 "nvme_iov_md": false 00:08:56.634 }, 00:08:56.634 "memory_domains": [ 00:08:56.634 { 00:08:56.634 "dma_device_id": "system", 00:08:56.634 "dma_device_type": 1 00:08:56.634 }, 00:08:56.634 { 00:08:56.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.634 "dma_device_type": 2 00:08:56.634 } 00:08:56.634 ], 00:08:56.634 "driver_specific": {} 00:08:56.634 } 00:08:56.634 ] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.634 BaseBdev3 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.634 [ 00:08:56.634 { 00:08:56.634 "name": "BaseBdev3", 00:08:56.634 "aliases": [ 00:08:56.634 "640d0838-27ba-4ddb-8090-50e6f10bc696" 00:08:56.634 ], 00:08:56.634 "product_name": "Malloc disk", 00:08:56.634 "block_size": 512, 00:08:56.634 "num_blocks": 65536, 00:08:56.634 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:56.634 "assigned_rate_limits": { 00:08:56.634 "rw_ios_per_sec": 0, 00:08:56.634 "rw_mbytes_per_sec": 0, 00:08:56.634 "r_mbytes_per_sec": 0, 00:08:56.634 "w_mbytes_per_sec": 0 00:08:56.634 }, 00:08:56.634 "claimed": false, 00:08:56.634 "zoned": false, 00:08:56.634 "supported_io_types": { 00:08:56.634 "read": true, 00:08:56.634 "write": true, 00:08:56.634 "unmap": true, 00:08:56.634 "flush": true, 00:08:56.634 "reset": true, 00:08:56.634 "nvme_admin": false, 00:08:56.634 "nvme_io": false, 00:08:56.634 "nvme_io_md": false, 00:08:56.634 "write_zeroes": true, 00:08:56.634 
"zcopy": true, 00:08:56.634 "get_zone_info": false, 00:08:56.634 "zone_management": false, 00:08:56.634 "zone_append": false, 00:08:56.634 "compare": false, 00:08:56.634 "compare_and_write": false, 00:08:56.634 "abort": true, 00:08:56.634 "seek_hole": false, 00:08:56.634 "seek_data": false, 00:08:56.634 "copy": true, 00:08:56.634 "nvme_iov_md": false 00:08:56.634 }, 00:08:56.634 "memory_domains": [ 00:08:56.634 { 00:08:56.634 "dma_device_id": "system", 00:08:56.634 "dma_device_type": 1 00:08:56.634 }, 00:08:56.634 { 00:08:56.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.634 "dma_device_type": 2 00:08:56.634 } 00:08:56.634 ], 00:08:56.634 "driver_specific": {} 00:08:56.634 } 00:08:56.634 ] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.634 [2024-12-13 04:24:56.568739] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.634 [2024-12-13 04:24:56.568794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.634 [2024-12-13 04:24:56.568816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.634 [2024-12-13 04:24:56.570970] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.634 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.634 04:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.634 "name": "Existed_Raid", 00:08:56.634 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:56.634 "strip_size_kb": 64, 00:08:56.634 "state": "configuring", 00:08:56.634 "raid_level": "concat", 00:08:56.634 "superblock": true, 00:08:56.634 "num_base_bdevs": 3, 00:08:56.634 "num_base_bdevs_discovered": 2, 00:08:56.634 "num_base_bdevs_operational": 3, 00:08:56.634 "base_bdevs_list": [ 00:08:56.634 { 00:08:56.634 "name": "BaseBdev1", 00:08:56.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.635 "is_configured": false, 00:08:56.635 "data_offset": 0, 00:08:56.635 "data_size": 0 00:08:56.635 }, 00:08:56.635 { 00:08:56.635 "name": "BaseBdev2", 00:08:56.635 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:56.635 "is_configured": true, 00:08:56.635 "data_offset": 2048, 00:08:56.635 "data_size": 63488 00:08:56.635 }, 00:08:56.635 { 00:08:56.635 "name": "BaseBdev3", 00:08:56.635 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:56.635 "is_configured": true, 00:08:56.635 "data_offset": 2048, 00:08:56.635 "data_size": 63488 00:08:56.635 } 00:08:56.635 ] 00:08:56.635 }' 00:08:56.635 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.635 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.203 [2024-12-13 04:24:56.984049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.203 04:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.203 04:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.203 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.203 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.203 "name": "Existed_Raid", 00:08:57.203 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:57.203 "strip_size_kb": 64, 
00:08:57.203 "state": "configuring", 00:08:57.203 "raid_level": "concat", 00:08:57.203 "superblock": true, 00:08:57.203 "num_base_bdevs": 3, 00:08:57.203 "num_base_bdevs_discovered": 1, 00:08:57.203 "num_base_bdevs_operational": 3, 00:08:57.203 "base_bdevs_list": [ 00:08:57.203 { 00:08:57.203 "name": "BaseBdev1", 00:08:57.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.203 "is_configured": false, 00:08:57.203 "data_offset": 0, 00:08:57.203 "data_size": 0 00:08:57.203 }, 00:08:57.203 { 00:08:57.203 "name": null, 00:08:57.203 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:57.203 "is_configured": false, 00:08:57.203 "data_offset": 0, 00:08:57.203 "data_size": 63488 00:08:57.203 }, 00:08:57.203 { 00:08:57.203 "name": "BaseBdev3", 00:08:57.203 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:57.203 "is_configured": true, 00:08:57.203 "data_offset": 2048, 00:08:57.203 "data_size": 63488 00:08:57.203 } 00:08:57.203 ] 00:08:57.203 }' 00:08:57.203 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.203 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 [2024-12-13 04:24:57.555848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.778 BaseBdev1 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 
[ 00:08:57.778 { 00:08:57.778 "name": "BaseBdev1", 00:08:57.778 "aliases": [ 00:08:57.778 "a2c0cf46-e2d6-42dd-81e8-acbf047630d7" 00:08:57.778 ], 00:08:57.778 "product_name": "Malloc disk", 00:08:57.778 "block_size": 512, 00:08:57.778 "num_blocks": 65536, 00:08:57.778 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:08:57.778 "assigned_rate_limits": { 00:08:57.778 "rw_ios_per_sec": 0, 00:08:57.778 "rw_mbytes_per_sec": 0, 00:08:57.778 "r_mbytes_per_sec": 0, 00:08:57.778 "w_mbytes_per_sec": 0 00:08:57.778 }, 00:08:57.778 "claimed": true, 00:08:57.778 "claim_type": "exclusive_write", 00:08:57.778 "zoned": false, 00:08:57.778 "supported_io_types": { 00:08:57.778 "read": true, 00:08:57.778 "write": true, 00:08:57.778 "unmap": true, 00:08:57.778 "flush": true, 00:08:57.778 "reset": true, 00:08:57.778 "nvme_admin": false, 00:08:57.778 "nvme_io": false, 00:08:57.778 "nvme_io_md": false, 00:08:57.778 "write_zeroes": true, 00:08:57.778 "zcopy": true, 00:08:57.778 "get_zone_info": false, 00:08:57.778 "zone_management": false, 00:08:57.778 "zone_append": false, 00:08:57.778 "compare": false, 00:08:57.778 "compare_and_write": false, 00:08:57.778 "abort": true, 00:08:57.778 "seek_hole": false, 00:08:57.778 "seek_data": false, 00:08:57.778 "copy": true, 00:08:57.778 "nvme_iov_md": false 00:08:57.778 }, 00:08:57.778 "memory_domains": [ 00:08:57.778 { 00:08:57.778 "dma_device_id": "system", 00:08:57.778 "dma_device_type": 1 00:08:57.778 }, 00:08:57.778 { 00:08:57.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.778 "dma_device_type": 2 00:08:57.778 } 00:08:57.778 ], 00:08:57.778 "driver_specific": {} 00:08:57.778 } 00:08:57.778 ] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.778 "name": "Existed_Raid", 00:08:57.778 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:57.778 "strip_size_kb": 64, 00:08:57.778 "state": "configuring", 00:08:57.778 "raid_level": "concat", 00:08:57.778 "superblock": true, 
00:08:57.778 "num_base_bdevs": 3, 00:08:57.778 "num_base_bdevs_discovered": 2, 00:08:57.778 "num_base_bdevs_operational": 3, 00:08:57.778 "base_bdevs_list": [ 00:08:57.778 { 00:08:57.778 "name": "BaseBdev1", 00:08:57.778 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:08:57.778 "is_configured": true, 00:08:57.778 "data_offset": 2048, 00:08:57.778 "data_size": 63488 00:08:57.778 }, 00:08:57.778 { 00:08:57.778 "name": null, 00:08:57.778 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:57.778 "is_configured": false, 00:08:57.778 "data_offset": 0, 00:08:57.778 "data_size": 63488 00:08:57.778 }, 00:08:57.778 { 00:08:57.778 "name": "BaseBdev3", 00:08:57.778 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:57.778 "is_configured": true, 00:08:57.778 "data_offset": 2048, 00:08:57.778 "data_size": 63488 00:08:57.778 } 00:08:57.778 ] 00:08:57.778 }' 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.778 04:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.059 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.059 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.059 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.059 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.059 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.336 [2024-12-13 04:24:58.095022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.336 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.336 "name": "Existed_Raid", 00:08:58.336 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:58.336 "strip_size_kb": 64, 00:08:58.336 "state": "configuring", 00:08:58.336 "raid_level": "concat", 00:08:58.336 "superblock": true, 00:08:58.336 "num_base_bdevs": 3, 00:08:58.336 "num_base_bdevs_discovered": 1, 00:08:58.336 "num_base_bdevs_operational": 3, 00:08:58.336 "base_bdevs_list": [ 00:08:58.336 { 00:08:58.336 "name": "BaseBdev1", 00:08:58.336 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:08:58.336 "is_configured": true, 00:08:58.336 "data_offset": 2048, 00:08:58.336 "data_size": 63488 00:08:58.336 }, 00:08:58.336 { 00:08:58.336 "name": null, 00:08:58.336 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:58.336 "is_configured": false, 00:08:58.336 "data_offset": 0, 00:08:58.336 "data_size": 63488 00:08:58.336 }, 00:08:58.336 { 00:08:58.336 "name": null, 00:08:58.336 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:58.336 "is_configured": false, 00:08:58.336 "data_offset": 0, 00:08:58.336 "data_size": 63488 00:08:58.337 } 00:08:58.337 ] 00:08:58.337 }' 00:08:58.337 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.337 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.595 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.595 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.595 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.595 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:08:58.595 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.854 [2024-12-13 04:24:58.626138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.854 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.854 "name": "Existed_Raid", 00:08:58.854 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:58.854 "strip_size_kb": 64, 00:08:58.854 "state": "configuring", 00:08:58.854 "raid_level": "concat", 00:08:58.854 "superblock": true, 00:08:58.854 "num_base_bdevs": 3, 00:08:58.854 "num_base_bdevs_discovered": 2, 00:08:58.854 "num_base_bdevs_operational": 3, 00:08:58.854 "base_bdevs_list": [ 00:08:58.854 { 00:08:58.854 "name": "BaseBdev1", 00:08:58.854 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:08:58.854 "is_configured": true, 00:08:58.854 "data_offset": 2048, 00:08:58.854 "data_size": 63488 00:08:58.854 }, 00:08:58.854 { 00:08:58.854 "name": null, 00:08:58.854 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:58.854 "is_configured": false, 00:08:58.854 "data_offset": 0, 00:08:58.854 "data_size": 63488 00:08:58.854 }, 00:08:58.854 { 00:08:58.854 "name": "BaseBdev3", 00:08:58.854 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:58.854 "is_configured": true, 00:08:58.854 "data_offset": 2048, 00:08:58.854 "data_size": 63488 00:08:58.855 } 00:08:58.855 ] 00:08:58.855 }' 00:08:58.855 04:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.855 04:24:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.114 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.114 [2024-12-13 04:24:59.125327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.373 "name": "Existed_Raid", 00:08:59.373 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:59.373 "strip_size_kb": 64, 00:08:59.373 "state": "configuring", 00:08:59.373 "raid_level": "concat", 00:08:59.373 "superblock": true, 00:08:59.373 "num_base_bdevs": 3, 00:08:59.373 "num_base_bdevs_discovered": 1, 00:08:59.373 "num_base_bdevs_operational": 3, 00:08:59.373 "base_bdevs_list": [ 00:08:59.373 { 00:08:59.373 "name": null, 00:08:59.373 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:08:59.373 "is_configured": false, 00:08:59.373 "data_offset": 0, 00:08:59.373 "data_size": 63488 00:08:59.373 }, 00:08:59.373 { 00:08:59.373 "name": null, 00:08:59.373 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:59.373 "is_configured": false, 00:08:59.373 "data_offset": 0, 
00:08:59.373 "data_size": 63488 00:08:59.373 }, 00:08:59.373 { 00:08:59.373 "name": "BaseBdev3", 00:08:59.373 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:59.373 "is_configured": true, 00:08:59.373 "data_offset": 2048, 00:08:59.373 "data_size": 63488 00:08:59.373 } 00:08:59.373 ] 00:08:59.373 }' 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.373 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 [2024-12-13 04:24:59.616196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.632 04:24:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.632 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.891 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.891 "name": "Existed_Raid", 00:08:59.891 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:08:59.891 "strip_size_kb": 64, 00:08:59.891 "state": "configuring", 00:08:59.891 "raid_level": "concat", 00:08:59.891 "superblock": true, 00:08:59.891 "num_base_bdevs": 3, 00:08:59.891 
"num_base_bdevs_discovered": 2, 00:08:59.891 "num_base_bdevs_operational": 3, 00:08:59.891 "base_bdevs_list": [ 00:08:59.891 { 00:08:59.891 "name": null, 00:08:59.891 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:08:59.891 "is_configured": false, 00:08:59.891 "data_offset": 0, 00:08:59.891 "data_size": 63488 00:08:59.891 }, 00:08:59.891 { 00:08:59.891 "name": "BaseBdev2", 00:08:59.891 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:08:59.891 "is_configured": true, 00:08:59.891 "data_offset": 2048, 00:08:59.891 "data_size": 63488 00:08:59.891 }, 00:08:59.891 { 00:08:59.891 "name": "BaseBdev3", 00:08:59.891 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:08:59.891 "is_configured": true, 00:08:59.891 "data_offset": 2048, 00:08:59.891 "data_size": 63488 00:08:59.891 } 00:08:59.891 ] 00:08:59.891 }' 00:08:59.891 04:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.891 04:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.150 04:25:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a2c0cf46-e2d6-42dd-81e8-acbf047630d7 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.150 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.410 [2024-12-13 04:25:00.176047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:00.410 [2024-12-13 04:25:00.176356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:00.410 [2024-12-13 04:25:00.176410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.410 [2024-12-13 04:25:00.176725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:00.410 NewBaseBdev 00:09:00.410 [2024-12-13 04:25:00.176904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:00.410 [2024-12-13 04:25:00.176916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:00.410 [2024-12-13 04:25:00.177032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:00.410 
04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.410 [ 00:09:00.410 { 00:09:00.410 "name": "NewBaseBdev", 00:09:00.410 "aliases": [ 00:09:00.410 "a2c0cf46-e2d6-42dd-81e8-acbf047630d7" 00:09:00.410 ], 00:09:00.410 "product_name": "Malloc disk", 00:09:00.410 "block_size": 512, 00:09:00.410 "num_blocks": 65536, 00:09:00.410 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:09:00.410 "assigned_rate_limits": { 00:09:00.410 "rw_ios_per_sec": 0, 00:09:00.410 "rw_mbytes_per_sec": 0, 00:09:00.410 "r_mbytes_per_sec": 0, 00:09:00.410 "w_mbytes_per_sec": 0 00:09:00.410 }, 00:09:00.410 "claimed": true, 00:09:00.410 "claim_type": "exclusive_write", 00:09:00.410 "zoned": false, 00:09:00.410 "supported_io_types": { 00:09:00.410 "read": true, 00:09:00.410 "write": true, 00:09:00.410 
"unmap": true, 00:09:00.410 "flush": true, 00:09:00.410 "reset": true, 00:09:00.410 "nvme_admin": false, 00:09:00.410 "nvme_io": false, 00:09:00.410 "nvme_io_md": false, 00:09:00.410 "write_zeroes": true, 00:09:00.410 "zcopy": true, 00:09:00.410 "get_zone_info": false, 00:09:00.410 "zone_management": false, 00:09:00.410 "zone_append": false, 00:09:00.410 "compare": false, 00:09:00.410 "compare_and_write": false, 00:09:00.410 "abort": true, 00:09:00.410 "seek_hole": false, 00:09:00.410 "seek_data": false, 00:09:00.410 "copy": true, 00:09:00.410 "nvme_iov_md": false 00:09:00.410 }, 00:09:00.410 "memory_domains": [ 00:09:00.410 { 00:09:00.410 "dma_device_id": "system", 00:09:00.410 "dma_device_type": 1 00:09:00.410 }, 00:09:00.410 { 00:09:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.410 "dma_device_type": 2 00:09:00.410 } 00:09:00.410 ], 00:09:00.410 "driver_specific": {} 00:09:00.410 } 00:09:00.410 ] 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.410 "name": "Existed_Raid", 00:09:00.410 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:09:00.410 "strip_size_kb": 64, 00:09:00.410 "state": "online", 00:09:00.410 "raid_level": "concat", 00:09:00.410 "superblock": true, 00:09:00.410 "num_base_bdevs": 3, 00:09:00.410 "num_base_bdevs_discovered": 3, 00:09:00.410 "num_base_bdevs_operational": 3, 00:09:00.410 "base_bdevs_list": [ 00:09:00.410 { 00:09:00.410 "name": "NewBaseBdev", 00:09:00.410 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:09:00.410 "is_configured": true, 00:09:00.410 "data_offset": 2048, 00:09:00.410 "data_size": 63488 00:09:00.410 }, 00:09:00.410 { 00:09:00.410 "name": "BaseBdev2", 00:09:00.410 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:09:00.410 "is_configured": true, 00:09:00.410 "data_offset": 2048, 00:09:00.410 "data_size": 63488 00:09:00.410 }, 00:09:00.410 { 00:09:00.410 "name": "BaseBdev3", 00:09:00.410 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 
00:09:00.410 "is_configured": true, 00:09:00.410 "data_offset": 2048, 00:09:00.410 "data_size": 63488 00:09:00.410 } 00:09:00.410 ] 00:09:00.410 }' 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.410 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.668 [2024-12-13 04:25:00.659651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.668 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.927 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.927 "name": "Existed_Raid", 00:09:00.927 "aliases": [ 00:09:00.927 "87a0ab63-509c-454b-8439-8096214e573b" 00:09:00.927 ], 00:09:00.927 
"product_name": "Raid Volume", 00:09:00.927 "block_size": 512, 00:09:00.927 "num_blocks": 190464, 00:09:00.927 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:09:00.927 "assigned_rate_limits": { 00:09:00.927 "rw_ios_per_sec": 0, 00:09:00.927 "rw_mbytes_per_sec": 0, 00:09:00.927 "r_mbytes_per_sec": 0, 00:09:00.927 "w_mbytes_per_sec": 0 00:09:00.927 }, 00:09:00.927 "claimed": false, 00:09:00.927 "zoned": false, 00:09:00.927 "supported_io_types": { 00:09:00.927 "read": true, 00:09:00.927 "write": true, 00:09:00.927 "unmap": true, 00:09:00.927 "flush": true, 00:09:00.927 "reset": true, 00:09:00.927 "nvme_admin": false, 00:09:00.927 "nvme_io": false, 00:09:00.927 "nvme_io_md": false, 00:09:00.927 "write_zeroes": true, 00:09:00.927 "zcopy": false, 00:09:00.927 "get_zone_info": false, 00:09:00.927 "zone_management": false, 00:09:00.927 "zone_append": false, 00:09:00.927 "compare": false, 00:09:00.927 "compare_and_write": false, 00:09:00.927 "abort": false, 00:09:00.927 "seek_hole": false, 00:09:00.927 "seek_data": false, 00:09:00.927 "copy": false, 00:09:00.927 "nvme_iov_md": false 00:09:00.927 }, 00:09:00.927 "memory_domains": [ 00:09:00.927 { 00:09:00.927 "dma_device_id": "system", 00:09:00.927 "dma_device_type": 1 00:09:00.927 }, 00:09:00.927 { 00:09:00.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.927 "dma_device_type": 2 00:09:00.927 }, 00:09:00.927 { 00:09:00.927 "dma_device_id": "system", 00:09:00.927 "dma_device_type": 1 00:09:00.927 }, 00:09:00.927 { 00:09:00.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.927 "dma_device_type": 2 00:09:00.927 }, 00:09:00.927 { 00:09:00.927 "dma_device_id": "system", 00:09:00.927 "dma_device_type": 1 00:09:00.927 }, 00:09:00.927 { 00:09:00.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.927 "dma_device_type": 2 00:09:00.927 } 00:09:00.927 ], 00:09:00.927 "driver_specific": { 00:09:00.927 "raid": { 00:09:00.927 "uuid": "87a0ab63-509c-454b-8439-8096214e573b", 00:09:00.927 "strip_size_kb": 64, 00:09:00.927 
"state": "online", 00:09:00.927 "raid_level": "concat", 00:09:00.927 "superblock": true, 00:09:00.927 "num_base_bdevs": 3, 00:09:00.927 "num_base_bdevs_discovered": 3, 00:09:00.927 "num_base_bdevs_operational": 3, 00:09:00.927 "base_bdevs_list": [ 00:09:00.928 { 00:09:00.928 "name": "NewBaseBdev", 00:09:00.928 "uuid": "a2c0cf46-e2d6-42dd-81e8-acbf047630d7", 00:09:00.928 "is_configured": true, 00:09:00.928 "data_offset": 2048, 00:09:00.928 "data_size": 63488 00:09:00.928 }, 00:09:00.928 { 00:09:00.928 "name": "BaseBdev2", 00:09:00.928 "uuid": "56cd24a7-8186-4d94-9825-eaebc9145f4c", 00:09:00.928 "is_configured": true, 00:09:00.928 "data_offset": 2048, 00:09:00.928 "data_size": 63488 00:09:00.928 }, 00:09:00.928 { 00:09:00.928 "name": "BaseBdev3", 00:09:00.928 "uuid": "640d0838-27ba-4ddb-8090-50e6f10bc696", 00:09:00.928 "is_configured": true, 00:09:00.928 "data_offset": 2048, 00:09:00.928 "data_size": 63488 00:09:00.928 } 00:09:00.928 ] 00:09:00.928 } 00:09:00.928 } 00:09:00.928 }' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:00.928 BaseBdev2 00:09:00.928 BaseBdev3' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.928 04:25:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.928 [2024-12-13 04:25:00.898873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.928 [2024-12-13 04:25:00.898905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.928 [2024-12-13 04:25:00.898991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.928 [2024-12-13 04:25:00.899054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.928 [2024-12-13 04:25:00.899075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79020 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79020 ']' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
79020 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.928 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79020 00:09:01.187 killing process with pid 79020 00:09:01.187 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.187 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.187 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79020' 00:09:01.187 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 79020 00:09:01.187 [2024-12-13 04:25:00.950652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.187 04:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 79020 00:09:01.187 [2024-12-13 04:25:01.009297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.447 04:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:01.447 00:09:01.447 real 0m9.214s 00:09:01.447 user 0m15.515s 00:09:01.447 sys 0m1.916s 00:09:01.447 04:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.447 ************************************ 00:09:01.447 END TEST raid_state_function_test_sb 00:09:01.447 ************************************ 00:09:01.447 04:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 04:25:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:01.447 04:25:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:01.447 
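The per-bdev property check traced above (`bdev_raid.sh` lines 189-193) builds a `block_size md_size md_interleave dif_type` string for the raid bdev and for each base bdev, then compares them. Because `jq`'s `join(" ")` renders the null metadata fields as empty strings, both sides come out as `'512   '` (512 plus three spaces), which is what the `[[ 512 == \5\1\2\ \ \ ]]` matches show. A minimal standalone sketch of that comparison, with the bdev names and values taken from the dump above and the RPC call stubbed out so no SPDK target is needed:

```shell
# Values as dumped above: block_size=512; md_size, md_interleave and dif_type
# are null, so jq's join(" ") yields "512" followed by three spaces.
cmp_raid_bdev='512   '

for name in NewBaseBdev BaseBdev2 BaseBdev3; do
    # In the real test this comes from:
    #   rpc_cmd bdev_get_bdevs -b "$name" \
    #     | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    cmp_base_bdev='512   '
    [ "$cmp_base_bdev" = "$cmp_raid_bdev" ] || { echo "mismatch on $name"; exit 1; }
done
echo "base bdevs match raid bdev"
```

The comparison being a plain string equality is deliberate: any divergence in metadata layout or DIF type between a base bdev and the assembled raid volume fails the test immediately.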
04:25:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.447 04:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 ************************************ 00:09:01.447 START TEST raid_superblock_test 00:09:01.447 ************************************ 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79625 00:09:01.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79625 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79625 ']' 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.447 04:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.707 [2024-12-13 04:25:01.501954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
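The `waitforlisten 79625` call above blocks until the freshly started `bdev_svc` process is alive and its RPC socket at `/var/tmp/spdk.sock` is ready. A simplified, hypothetical re-implementation of the idea follows; the real helper in `common/autotest_common.sh` additionally retries `rpc.py` against the socket and handles more corner cases:

```shell
# Hypothetical simplified waitforlisten: poll until the pid is alive AND the
# RPC UNIX-domain socket exists; give up after max_retries polls.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [ -S "$rpc_addr" ] && return 0           # socket is up, target is listening
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                     # timed out waiting for the socket
}
```

The `kill -0` probe is the same liveness idiom the log's `killprocess` helper uses later: signal 0 delivers nothing but reports whether the pid still exists.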
00:09:01.707 [2024-12-13 04:25:01.502159] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79625 ] 00:09:01.707 [2024-12-13 04:25:01.635419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.707 [2024-12-13 04:25:01.674341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.965 [2024-12-13 04:25:01.750199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.965 [2024-12-13 04:25:01.750314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:02.535 
04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.535 malloc1 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.535 [2024-12-13 04:25:02.351001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:02.535 [2024-12-13 04:25:02.351070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.535 [2024-12-13 04:25:02.351096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:02.535 [2024-12-13 04:25:02.351114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.535 [2024-12-13 04:25:02.353549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.535 [2024-12-13 04:25:02.353635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:02.535 pt1 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.535 malloc2 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.535 [2024-12-13 04:25:02.385518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:02.535 [2024-12-13 04:25:02.385642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.535 [2024-12-13 04:25:02.385679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:02.535 [2024-12-13 04:25:02.385709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.535 [2024-12-13 04:25:02.388059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.535 [2024-12-13 04:25:02.388129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:02.535 
pt2 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.535 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 malloc3 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 [2024-12-13 04:25:02.424057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:02.536 [2024-12-13 04:25:02.424170] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.536 [2024-12-13 04:25:02.424211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:02.536 [2024-12-13 04:25:02.424241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.536 [2024-12-13 04:25:02.426655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.536 [2024-12-13 04:25:02.426727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:02.536 pt3 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 [2024-12-13 04:25:02.436107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:02.536 [2024-12-13 04:25:02.438245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.536 [2024-12-13 04:25:02.438360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:02.536 [2024-12-13 04:25:02.438538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:02.536 [2024-12-13 04:25:02.438554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.536 [2024-12-13 04:25:02.438856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
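The geometry in the creation debug lines above is internally consistent. Assuming `bdev_malloc_create 32 512` makes a 32 MiB bdev with 512-byte blocks (65536 blocks), the superblock variant (`-s`) reserves a `data_offset` of 2048 blocks on each base bdev, leaving a `data_size` of 63488 blocks, and a 3-disk concat volume then exposes 3 × 63488 blocks, matching the `blockcnt 190464, blocklen 512` logged here. As a quick arithmetic check:

```shell
# Cross-check of the sizes logged above; the numbers come straight from the dump.
malloc_mb=32; block_size=512; data_offset=2048; num_base_bdevs=3

base_blocks=$(( malloc_mb * 1024 * 1024 / block_size ))   # 65536 blocks per malloc bdev
data_size=$(( base_blocks - data_offset ))                # 63488 usable after superblock
raid_blocks=$(( data_size * num_base_bdevs ))             # concat sums the data regions

echo "$raid_blocks"   # 190464, matching "blockcnt 190464, blocklen 512"
```

This is also why the per-bdev JSON later reports `"data_offset": 2048, "data_size": 63488` for pt1, pt2 and pt3.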
00:09:02.536 [2024-12-13 04:25:02.439001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:02.536 [2024-12-13 04:25:02.439014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:02.536 [2024-12-13 04:25:02.439143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.536 04:25:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.536 "name": "raid_bdev1", 00:09:02.536 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:02.536 "strip_size_kb": 64, 00:09:02.536 "state": "online", 00:09:02.536 "raid_level": "concat", 00:09:02.536 "superblock": true, 00:09:02.536 "num_base_bdevs": 3, 00:09:02.536 "num_base_bdevs_discovered": 3, 00:09:02.536 "num_base_bdevs_operational": 3, 00:09:02.536 "base_bdevs_list": [ 00:09:02.536 { 00:09:02.536 "name": "pt1", 00:09:02.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.536 "is_configured": true, 00:09:02.536 "data_offset": 2048, 00:09:02.536 "data_size": 63488 00:09:02.536 }, 00:09:02.536 { 00:09:02.536 "name": "pt2", 00:09:02.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.536 "is_configured": true, 00:09:02.536 "data_offset": 2048, 00:09:02.536 "data_size": 63488 00:09:02.536 }, 00:09:02.536 { 00:09:02.536 "name": "pt3", 00:09:02.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.536 "is_configured": true, 00:09:02.536 "data_offset": 2048, 00:09:02.536 "data_size": 63488 00:09:02.536 } 00:09:02.536 ] 00:09:02.536 }' 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.536 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:03.106 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.107 [2024-12-13 04:25:02.835739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.107 "name": "raid_bdev1", 00:09:03.107 "aliases": [ 00:09:03.107 "a2f5da52-33b5-4f0e-a436-0a73903ad57a" 00:09:03.107 ], 00:09:03.107 "product_name": "Raid Volume", 00:09:03.107 "block_size": 512, 00:09:03.107 "num_blocks": 190464, 00:09:03.107 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:03.107 "assigned_rate_limits": { 00:09:03.107 "rw_ios_per_sec": 0, 00:09:03.107 "rw_mbytes_per_sec": 0, 00:09:03.107 "r_mbytes_per_sec": 0, 00:09:03.107 "w_mbytes_per_sec": 0 00:09:03.107 }, 00:09:03.107 "claimed": false, 00:09:03.107 "zoned": false, 00:09:03.107 "supported_io_types": { 00:09:03.107 "read": true, 00:09:03.107 "write": true, 00:09:03.107 "unmap": true, 00:09:03.107 "flush": true, 00:09:03.107 "reset": true, 00:09:03.107 "nvme_admin": false, 00:09:03.107 "nvme_io": false, 00:09:03.107 "nvme_io_md": false, 00:09:03.107 "write_zeroes": true, 00:09:03.107 "zcopy": false, 00:09:03.107 "get_zone_info": false, 00:09:03.107 "zone_management": false, 00:09:03.107 "zone_append": false, 00:09:03.107 "compare": 
false, 00:09:03.107 "compare_and_write": false, 00:09:03.107 "abort": false, 00:09:03.107 "seek_hole": false, 00:09:03.107 "seek_data": false, 00:09:03.107 "copy": false, 00:09:03.107 "nvme_iov_md": false 00:09:03.107 }, 00:09:03.107 "memory_domains": [ 00:09:03.107 { 00:09:03.107 "dma_device_id": "system", 00:09:03.107 "dma_device_type": 1 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.107 "dma_device_type": 2 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "dma_device_id": "system", 00:09:03.107 "dma_device_type": 1 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.107 "dma_device_type": 2 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "dma_device_id": "system", 00:09:03.107 "dma_device_type": 1 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.107 "dma_device_type": 2 00:09:03.107 } 00:09:03.107 ], 00:09:03.107 "driver_specific": { 00:09:03.107 "raid": { 00:09:03.107 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:03.107 "strip_size_kb": 64, 00:09:03.107 "state": "online", 00:09:03.107 "raid_level": "concat", 00:09:03.107 "superblock": true, 00:09:03.107 "num_base_bdevs": 3, 00:09:03.107 "num_base_bdevs_discovered": 3, 00:09:03.107 "num_base_bdevs_operational": 3, 00:09:03.107 "base_bdevs_list": [ 00:09:03.107 { 00:09:03.107 "name": "pt1", 00:09:03.107 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.107 "is_configured": true, 00:09:03.107 "data_offset": 2048, 00:09:03.107 "data_size": 63488 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "name": "pt2", 00:09:03.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.107 "is_configured": true, 00:09:03.107 "data_offset": 2048, 00:09:03.107 "data_size": 63488 00:09:03.107 }, 00:09:03.107 { 00:09:03.107 "name": "pt3", 00:09:03.107 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.107 "is_configured": true, 00:09:03.107 "data_offset": 2048, 00:09:03.107 
"data_size": 63488 00:09:03.107 } 00:09:03.107 ] 00:09:03.107 } 00:09:03.107 } 00:09:03.107 }' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:03.107 pt2 00:09:03.107 pt3' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.107 04:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.107 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.107 [2024-12-13 04:25:03.103145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2f5da52-33b5-4f0e-a436-0a73903ad57a 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a2f5da52-33b5-4f0e-a436-0a73903ad57a ']' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 [2024-12-13 04:25:03.146810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.368 [2024-12-13 04:25:03.146882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.368 [2024-12-13 04:25:03.147004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.368 [2024-12-13 04:25:03.147087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.368 [2024-12-13 04:25:03.147104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 [2024-12-13 04:25:03.294613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:03.368 [2024-12-13 04:25:03.296808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:03.368 [2024-12-13 04:25:03.296911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:03.368 [2024-12-13 04:25:03.296973] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:03.368 [2024-12-13 04:25:03.297018] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:03.368 [2024-12-13 04:25:03.297054] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:03.368 [2024-12-13 04:25:03.297067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.368 [2024-12-13 04:25:03.297078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:03.368 request: 00:09:03.368 { 00:09:03.368 "name": "raid_bdev1", 00:09:03.368 "raid_level": "concat", 00:09:03.368 "base_bdevs": [ 00:09:03.368 "malloc1", 00:09:03.368 "malloc2", 00:09:03.368 "malloc3" 00:09:03.368 ], 00:09:03.368 "strip_size_kb": 64, 00:09:03.368 "superblock": false, 00:09:03.368 "method": "bdev_raid_create", 00:09:03.368 "req_id": 1 00:09:03.368 } 00:09:03.368 Got JSON-RPC error response 00:09:03.368 response: 00:09:03.368 { 00:09:03.368 "code": -17, 00:09:03.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:03.368 } 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.368 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 [2024-12-13 04:25:03.358473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.368 [2024-12-13 04:25:03.358565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.368 [2024-12-13 04:25:03.358599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:03.368 [2024-12-13 04:25:03.358627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.368 [2024-12-13 04:25:03.361144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.368 [2024-12-13 04:25:03.361236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.368 [2024-12-13 04:25:03.361330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:03.369 [2024-12-13 04:25:03.361400] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:03.369 pt1 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.369 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.628 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.628 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.628 "name": "raid_bdev1", 
00:09:03.628 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:03.628 "strip_size_kb": 64, 00:09:03.628 "state": "configuring", 00:09:03.628 "raid_level": "concat", 00:09:03.628 "superblock": true, 00:09:03.628 "num_base_bdevs": 3, 00:09:03.628 "num_base_bdevs_discovered": 1, 00:09:03.628 "num_base_bdevs_operational": 3, 00:09:03.628 "base_bdevs_list": [ 00:09:03.628 { 00:09:03.628 "name": "pt1", 00:09:03.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.628 "is_configured": true, 00:09:03.628 "data_offset": 2048, 00:09:03.628 "data_size": 63488 00:09:03.628 }, 00:09:03.628 { 00:09:03.628 "name": null, 00:09:03.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.628 "is_configured": false, 00:09:03.628 "data_offset": 2048, 00:09:03.628 "data_size": 63488 00:09:03.628 }, 00:09:03.628 { 00:09:03.628 "name": null, 00:09:03.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.628 "is_configured": false, 00:09:03.628 "data_offset": 2048, 00:09:03.628 "data_size": 63488 00:09:03.628 } 00:09:03.628 ] 00:09:03.628 }' 00:09:03.628 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.628 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.889 [2024-12-13 04:25:03.833684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.889 [2024-12-13 04:25:03.833766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.889 [2024-12-13 04:25:03.833792] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:03.889 [2024-12-13 04:25:03.833806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.889 [2024-12-13 04:25:03.834279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.889 [2024-12-13 04:25:03.834302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.889 [2024-12-13 04:25:03.834389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:03.889 [2024-12-13 04:25:03.834415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.889 pt2 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.889 [2024-12-13 04:25:03.841662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.889 "name": "raid_bdev1", 00:09:03.889 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:03.889 "strip_size_kb": 64, 00:09:03.889 "state": "configuring", 00:09:03.889 "raid_level": "concat", 00:09:03.889 "superblock": true, 00:09:03.889 "num_base_bdevs": 3, 00:09:03.889 "num_base_bdevs_discovered": 1, 00:09:03.889 "num_base_bdevs_operational": 3, 00:09:03.889 "base_bdevs_list": [ 00:09:03.889 { 00:09:03.889 "name": "pt1", 00:09:03.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.889 "is_configured": true, 00:09:03.889 "data_offset": 2048, 00:09:03.889 "data_size": 63488 00:09:03.889 }, 00:09:03.889 { 00:09:03.889 "name": null, 00:09:03.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.889 "is_configured": false, 00:09:03.889 "data_offset": 0, 00:09:03.889 "data_size": 63488 00:09:03.889 }, 00:09:03.889 { 00:09:03.889 "name": null, 00:09:03.889 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.889 "is_configured": false, 00:09:03.889 "data_offset": 2048, 00:09:03.889 "data_size": 63488 00:09:03.889 } 00:09:03.889 ] 00:09:03.889 }' 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.889 04:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 [2024-12-13 04:25:04.328924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.459 [2024-12-13 04:25:04.328996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.459 [2024-12-13 04:25:04.329020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:04.459 [2024-12-13 04:25:04.329029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.459 [2024-12-13 04:25:04.329530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.459 [2024-12-13 04:25:04.329549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.459 [2024-12-13 04:25:04.329638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:04.459 [2024-12-13 04:25:04.329662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.459 pt2 00:09:04.459 04:25:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 [2024-12-13 04:25:04.340876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:04.459 [2024-12-13 04:25:04.340926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.459 [2024-12-13 04:25:04.340951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:04.459 [2024-12-13 04:25:04.340959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.459 [2024-12-13 04:25:04.341317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.459 [2024-12-13 04:25:04.341333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:04.459 [2024-12-13 04:25:04.341391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:04.459 [2024-12-13 04:25:04.341408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:04.459 [2024-12-13 04:25:04.341536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:04.459 [2024-12-13 04:25:04.341547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.459 [2024-12-13 04:25:04.341788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:09:04.459 [2024-12-13 04:25:04.341900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:04.459 [2024-12-13 04:25:04.341912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:04.459 [2024-12-13 04:25:04.342027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.459 pt3 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.459 04:25:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.459 "name": "raid_bdev1", 00:09:04.459 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:04.459 "strip_size_kb": 64, 00:09:04.459 "state": "online", 00:09:04.459 "raid_level": "concat", 00:09:04.459 "superblock": true, 00:09:04.459 "num_base_bdevs": 3, 00:09:04.459 "num_base_bdevs_discovered": 3, 00:09:04.459 "num_base_bdevs_operational": 3, 00:09:04.459 "base_bdevs_list": [ 00:09:04.459 { 00:09:04.459 "name": "pt1", 00:09:04.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.459 "is_configured": true, 00:09:04.459 "data_offset": 2048, 00:09:04.459 "data_size": 63488 00:09:04.459 }, 00:09:04.459 { 00:09:04.459 "name": "pt2", 00:09:04.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.459 "is_configured": true, 00:09:04.459 "data_offset": 2048, 00:09:04.459 "data_size": 63488 00:09:04.459 }, 00:09:04.459 { 00:09:04.459 "name": "pt3", 00:09:04.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.459 "is_configured": true, 00:09:04.459 "data_offset": 2048, 00:09:04.459 "data_size": 63488 00:09:04.459 } 00:09:04.459 ] 00:09:04.459 }' 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.459 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.720 [2024-12-13 04:25:04.716616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.720 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.980 "name": "raid_bdev1", 00:09:04.980 "aliases": [ 00:09:04.980 "a2f5da52-33b5-4f0e-a436-0a73903ad57a" 00:09:04.980 ], 00:09:04.980 "product_name": "Raid Volume", 00:09:04.980 "block_size": 512, 00:09:04.980 "num_blocks": 190464, 00:09:04.980 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:04.980 "assigned_rate_limits": { 00:09:04.980 "rw_ios_per_sec": 0, 00:09:04.980 "rw_mbytes_per_sec": 0, 00:09:04.980 "r_mbytes_per_sec": 0, 00:09:04.980 "w_mbytes_per_sec": 0 00:09:04.980 }, 00:09:04.980 "claimed": false, 00:09:04.980 "zoned": false, 00:09:04.980 "supported_io_types": { 00:09:04.980 "read": true, 00:09:04.980 "write": true, 00:09:04.980 "unmap": true, 00:09:04.980 "flush": true, 00:09:04.980 "reset": true, 00:09:04.980 "nvme_admin": false, 00:09:04.980 "nvme_io": false, 
00:09:04.980 "nvme_io_md": false, 00:09:04.980 "write_zeroes": true, 00:09:04.980 "zcopy": false, 00:09:04.980 "get_zone_info": false, 00:09:04.980 "zone_management": false, 00:09:04.980 "zone_append": false, 00:09:04.980 "compare": false, 00:09:04.980 "compare_and_write": false, 00:09:04.980 "abort": false, 00:09:04.980 "seek_hole": false, 00:09:04.980 "seek_data": false, 00:09:04.980 "copy": false, 00:09:04.980 "nvme_iov_md": false 00:09:04.980 }, 00:09:04.980 "memory_domains": [ 00:09:04.980 { 00:09:04.980 "dma_device_id": "system", 00:09:04.980 "dma_device_type": 1 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.980 "dma_device_type": 2 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "dma_device_id": "system", 00:09:04.980 "dma_device_type": 1 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.980 "dma_device_type": 2 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "dma_device_id": "system", 00:09:04.980 "dma_device_type": 1 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.980 "dma_device_type": 2 00:09:04.980 } 00:09:04.980 ], 00:09:04.980 "driver_specific": { 00:09:04.980 "raid": { 00:09:04.980 "uuid": "a2f5da52-33b5-4f0e-a436-0a73903ad57a", 00:09:04.980 "strip_size_kb": 64, 00:09:04.980 "state": "online", 00:09:04.980 "raid_level": "concat", 00:09:04.980 "superblock": true, 00:09:04.980 "num_base_bdevs": 3, 00:09:04.980 "num_base_bdevs_discovered": 3, 00:09:04.980 "num_base_bdevs_operational": 3, 00:09:04.980 "base_bdevs_list": [ 00:09:04.980 { 00:09:04.980 "name": "pt1", 00:09:04.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.980 "is_configured": true, 00:09:04.980 "data_offset": 2048, 00:09:04.980 "data_size": 63488 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "name": "pt2", 00:09:04.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.980 "is_configured": true, 00:09:04.980 "data_offset": 2048, 00:09:04.980 
"data_size": 63488 00:09:04.980 }, 00:09:04.980 { 00:09:04.980 "name": "pt3", 00:09:04.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.980 "is_configured": true, 00:09:04.980 "data_offset": 2048, 00:09:04.980 "data_size": 63488 00:09:04.980 } 00:09:04.980 ] 00:09:04.980 } 00:09:04.980 } 00:09:04.980 }' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.980 pt2 00:09:04.980 pt3' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.980 04:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.240 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.240 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.240 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.241 04:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.241 [2024-12-13 04:25:05.007980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a2f5da52-33b5-4f0e-a436-0a73903ad57a '!=' a2f5da52-33b5-4f0e-a436-0a73903ad57a ']' 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79625 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79625 ']' 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79625 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79625 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79625' 00:09:05.241 killing process with pid 79625 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79625 00:09:05.241 [2024-12-13 04:25:05.081593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:05.241 [2024-12-13 04:25:05.081693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.241 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79625 00:09:05.241 [2024-12-13 04:25:05.081777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.241 [2024-12-13 04:25:05.081788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:05.241 [2024-12-13 04:25:05.143081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.501 04:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:05.501 ************************************ 00:09:05.501 END TEST raid_superblock_test 00:09:05.501 ************************************ 00:09:05.501 00:09:05.501 real 0m4.061s 00:09:05.501 user 0m6.240s 00:09:05.501 sys 0m0.940s 00:09:05.501 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.501 04:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.760 04:25:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:05.760 04:25:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.760 04:25:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.760 04:25:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.760 ************************************ 00:09:05.760 START TEST raid_read_error_test 00:09:05.760 ************************************ 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.760 04:25:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JcXLXjNmGz 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79866 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79866 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 79866 ']' 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.760 04:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.760 [2024-12-13 04:25:05.652331] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:05.760 [2024-12-13 04:25:05.652746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79866 ] 00:09:06.019 [2024-12-13 04:25:05.809253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.019 [2024-12-13 04:25:05.848108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.019 [2024-12-13 04:25:05.923600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.019 [2024-12-13 04:25:05.923715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.588 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.588 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.588 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.589 BaseBdev1_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.589 true 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.589 [2024-12-13 04:25:06.529604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:06.589 [2024-12-13 04:25:06.529763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.589 [2024-12-13 04:25:06.529813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:06.589 [2024-12-13 04:25:06.529823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.589 [2024-12-13 04:25:06.532590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.589 [2024-12-13 04:25:06.532632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:06.589 BaseBdev1 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.589 BaseBdev2_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.589 true 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.589 [2024-12-13 04:25:06.577388] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:06.589 [2024-12-13 04:25:06.577475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.589 [2024-12-13 04:25:06.577499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:06.589 [2024-12-13 04:25:06.577518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.589 [2024-12-13 04:25:06.579882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.589 [2024-12-13 04:25:06.579988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:06.589 BaseBdev2 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.589 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.849 BaseBdev3_malloc 00:09:06.849 04:25:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.849 true 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.849 [2024-12-13 04:25:06.624052] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:06.849 [2024-12-13 04:25:06.624101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.849 [2024-12-13 04:25:06.624124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:06.849 [2024-12-13 04:25:06.624133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.849 [2024-12-13 04:25:06.626513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.849 [2024-12-13 04:25:06.626607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:06.849 BaseBdev3 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.849 [2024-12-13 04:25:06.636070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.849 [2024-12-13 04:25:06.638167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.849 [2024-12-13 04:25:06.638296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.849 [2024-12-13 04:25:06.638526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:06.849 [2024-12-13 04:25:06.638543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.849 [2024-12-13 04:25:06.638810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:06.849 [2024-12-13 04:25:06.638962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:06.849 [2024-12-13 04:25:06.638973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:06.849 [2024-12-13 04:25:06.639106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.849 04:25:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.849 "name": "raid_bdev1", 00:09:06.849 "uuid": "3b4980b8-9a99-45a5-93f1-2979dbe730f7", 00:09:06.849 "strip_size_kb": 64, 00:09:06.849 "state": "online", 00:09:06.849 "raid_level": "concat", 00:09:06.849 "superblock": true, 00:09:06.849 "num_base_bdevs": 3, 00:09:06.849 "num_base_bdevs_discovered": 3, 00:09:06.849 "num_base_bdevs_operational": 3, 00:09:06.849 "base_bdevs_list": [ 00:09:06.849 { 00:09:06.849 "name": "BaseBdev1", 00:09:06.849 "uuid": "8c64c784-ebc8-5d05-a93f-c34b9743265b", 00:09:06.849 "is_configured": true, 00:09:06.849 "data_offset": 2048, 00:09:06.849 "data_size": 63488 00:09:06.849 }, 00:09:06.849 { 00:09:06.849 "name": "BaseBdev2", 00:09:06.849 "uuid": "698345e0-4dfd-5593-81eb-dbc85a48afd8", 00:09:06.849 "is_configured": true, 00:09:06.849 "data_offset": 2048, 00:09:06.849 "data_size": 63488 
00:09:06.849 }, 00:09:06.849 { 00:09:06.849 "name": "BaseBdev3", 00:09:06.849 "uuid": "41a41d98-4039-5fc9-99c0-5e2113496f00", 00:09:06.849 "is_configured": true, 00:09:06.849 "data_offset": 2048, 00:09:06.849 "data_size": 63488 00:09:06.849 } 00:09:06.849 ] 00:09:06.849 }' 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.849 04:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.109 04:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:07.109 04:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:07.369 [2024-12-13 04:25:07.211645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.309 "name": "raid_bdev1", 00:09:08.309 "uuid": "3b4980b8-9a99-45a5-93f1-2979dbe730f7", 00:09:08.309 "strip_size_kb": 64, 00:09:08.309 "state": "online", 00:09:08.309 "raid_level": "concat", 00:09:08.309 "superblock": true, 00:09:08.309 "num_base_bdevs": 3, 00:09:08.309 "num_base_bdevs_discovered": 3, 00:09:08.309 "num_base_bdevs_operational": 3, 00:09:08.309 "base_bdevs_list": [ 00:09:08.309 { 00:09:08.309 "name": "BaseBdev1", 00:09:08.309 "uuid": "8c64c784-ebc8-5d05-a93f-c34b9743265b", 00:09:08.309 "is_configured": true, 00:09:08.309 "data_offset": 2048, 00:09:08.309 "data_size": 63488 
00:09:08.309 }, 00:09:08.309 { 00:09:08.309 "name": "BaseBdev2", 00:09:08.309 "uuid": "698345e0-4dfd-5593-81eb-dbc85a48afd8", 00:09:08.309 "is_configured": true, 00:09:08.309 "data_offset": 2048, 00:09:08.309 "data_size": 63488 00:09:08.309 }, 00:09:08.309 { 00:09:08.309 "name": "BaseBdev3", 00:09:08.309 "uuid": "41a41d98-4039-5fc9-99c0-5e2113496f00", 00:09:08.309 "is_configured": true, 00:09:08.309 "data_offset": 2048, 00:09:08.309 "data_size": 63488 00:09:08.309 } 00:09:08.309 ] 00:09:08.309 }' 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.309 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.569 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.569 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.569 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.569 [2024-12-13 04:25:08.580345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.569 [2024-12-13 04:25:08.580520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.569 [2024-12-13 04:25:08.583251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.569 [2024-12-13 04:25:08.583348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.569 [2024-12-13 04:25:08.583408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.569 [2024-12-13 04:25:08.583477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:08.829 { 00:09:08.829 "results": [ 00:09:08.829 { 00:09:08.829 "job": "raid_bdev1", 00:09:08.829 "core_mask": "0x1", 00:09:08.829 "workload": "randrw", 00:09:08.829 "percentage": 50, 
00:09:08.829 "status": "finished", 00:09:08.829 "queue_depth": 1, 00:09:08.829 "io_size": 131072, 00:09:08.829 "runtime": 1.369394, 00:09:08.829 "iops": 14324.58445122441, 00:09:08.829 "mibps": 1790.5730564030512, 00:09:08.829 "io_failed": 1, 00:09:08.829 "io_timeout": 0, 00:09:08.829 "avg_latency_us": 97.80641894907566, 00:09:08.829 "min_latency_us": 25.823580786026202, 00:09:08.829 "max_latency_us": 1416.6078602620087 00:09:08.829 } 00:09:08.829 ], 00:09:08.829 "core_count": 1 00:09:08.829 } 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79866 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 79866 ']' 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 79866 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79866 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.829 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.830 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79866' 00:09:08.830 killing process with pid 79866 00:09:08.830 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 79866 00:09:08.830 [2024-12-13 04:25:08.632243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.830 04:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 79866 00:09:08.830 [2024-12-13 
04:25:08.680979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JcXLXjNmGz 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:09.090 00:09:09.090 real 0m3.474s 00:09:09.090 user 0m4.274s 00:09:09.090 sys 0m0.655s 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.090 ************************************ 00:09:09.090 END TEST raid_read_error_test 00:09:09.090 ************************************ 00:09:09.090 04:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.090 04:25:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:09.090 04:25:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:09.090 04:25:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.090 04:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.090 ************************************ 00:09:09.090 START TEST raid_write_error_test 00:09:09.090 ************************************ 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:09.090 04:25:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.090 04:25:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:09.090 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GoLDJ0dwDs 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80000 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80000 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 80000 ']' 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.350 04:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.350 [2024-12-13 04:25:09.192659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:09.350 [2024-12-13 04:25:09.192859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80000 ] 00:09:09.350 [2024-12-13 04:25:09.347577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.610 [2024-12-13 04:25:09.388502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.610 [2024-12-13 04:25:09.466633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.610 [2024-12-13 04:25:09.466773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 BaseBdev1_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 true 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 [2024-12-13 04:25:10.047912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.181 [2024-12-13 04:25:10.047969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.181 [2024-12-13 04:25:10.048008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:10.181 [2024-12-13 04:25:10.048025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.181 [2024-12-13 04:25:10.050558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.181 [2024-12-13 04:25:10.050593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.181 BaseBdev1 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.181 BaseBdev2_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 true 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 [2024-12-13 04:25:10.094647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.181 [2024-12-13 04:25:10.094696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.181 [2024-12-13 04:25:10.094717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:10.181 [2024-12-13 04:25:10.094744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.181 [2024-12-13 04:25:10.097072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.181 [2024-12-13 04:25:10.097108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.181 BaseBdev2 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.181 04:25:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 BaseBdev3_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 true 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 [2024-12-13 04:25:10.141181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.181 [2024-12-13 04:25:10.141225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.181 [2024-12-13 04:25:10.141248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:10.181 [2024-12-13 04:25:10.141256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.181 [2024-12-13 04:25:10.143621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.181 [2024-12-13 04:25:10.143654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:10.181 BaseBdev3 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 [2024-12-13 04:25:10.153224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.181 [2024-12-13 04:25:10.155343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.181 [2024-12-13 04:25:10.155417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.181 [2024-12-13 04:25:10.155637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:10.181 [2024-12-13 04:25:10.155652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.181 [2024-12-13 04:25:10.155908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:10.181 [2024-12-13 04:25:10.156059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:10.181 [2024-12-13 04:25:10.156077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:10.181 [2024-12-13 04:25:10.156211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.181 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.441 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.441 "name": "raid_bdev1", 00:09:10.441 "uuid": "8df2e33b-780c-45aa-b4dc-a70e7eeef826", 00:09:10.441 "strip_size_kb": 64, 00:09:10.441 "state": "online", 00:09:10.441 "raid_level": "concat", 00:09:10.441 "superblock": true, 00:09:10.441 "num_base_bdevs": 3, 00:09:10.441 "num_base_bdevs_discovered": 3, 00:09:10.441 "num_base_bdevs_operational": 3, 00:09:10.441 "base_bdevs_list": [ 00:09:10.441 { 00:09:10.441 
"name": "BaseBdev1", 00:09:10.441 "uuid": "634062cb-f8ec-5a25-aff8-fdca36239a59", 00:09:10.441 "is_configured": true, 00:09:10.441 "data_offset": 2048, 00:09:10.441 "data_size": 63488 00:09:10.441 }, 00:09:10.441 { 00:09:10.441 "name": "BaseBdev2", 00:09:10.441 "uuid": "176a3a02-bd54-595a-9268-bd2297ee8a52", 00:09:10.441 "is_configured": true, 00:09:10.441 "data_offset": 2048, 00:09:10.441 "data_size": 63488 00:09:10.441 }, 00:09:10.441 { 00:09:10.441 "name": "BaseBdev3", 00:09:10.441 "uuid": "3ddc96ea-1d61-590e-a24e-ebc2aab05912", 00:09:10.441 "is_configured": true, 00:09:10.441 "data_offset": 2048, 00:09:10.441 "data_size": 63488 00:09:10.441 } 00:09:10.441 ] 00:09:10.441 }' 00:09:10.441 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.441 04:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.700 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.700 04:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.960 [2024-12-13 04:25:10.740843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.900 "name": "raid_bdev1", 00:09:11.900 "uuid": "8df2e33b-780c-45aa-b4dc-a70e7eeef826", 00:09:11.900 "strip_size_kb": 64, 00:09:11.900 "state": "online", 
00:09:11.900 "raid_level": "concat", 00:09:11.900 "superblock": true, 00:09:11.900 "num_base_bdevs": 3, 00:09:11.900 "num_base_bdevs_discovered": 3, 00:09:11.900 "num_base_bdevs_operational": 3, 00:09:11.900 "base_bdevs_list": [ 00:09:11.900 { 00:09:11.900 "name": "BaseBdev1", 00:09:11.900 "uuid": "634062cb-f8ec-5a25-aff8-fdca36239a59", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 2048, 00:09:11.900 "data_size": 63488 00:09:11.900 }, 00:09:11.900 { 00:09:11.900 "name": "BaseBdev2", 00:09:11.900 "uuid": "176a3a02-bd54-595a-9268-bd2297ee8a52", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 2048, 00:09:11.900 "data_size": 63488 00:09:11.900 }, 00:09:11.900 { 00:09:11.900 "name": "BaseBdev3", 00:09:11.900 "uuid": "3ddc96ea-1d61-590e-a24e-ebc2aab05912", 00:09:11.900 "is_configured": true, 00:09:11.900 "data_offset": 2048, 00:09:11.900 "data_size": 63488 00:09:11.900 } 00:09:11.900 ] 00:09:11.900 }' 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.900 04:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.161 [2024-12-13 04:25:12.121504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.161 [2024-12-13 04:25:12.121626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.161 [2024-12-13 04:25:12.124367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.161 [2024-12-13 04:25:12.124480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.161 [2024-12-13 04:25:12.124538] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.161 [2024-12-13 04:25:12.124600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:12.161 { 00:09:12.161 "results": [ 00:09:12.161 { 00:09:12.161 "job": "raid_bdev1", 00:09:12.161 "core_mask": "0x1", 00:09:12.161 "workload": "randrw", 00:09:12.161 "percentage": 50, 00:09:12.161 "status": "finished", 00:09:12.161 "queue_depth": 1, 00:09:12.161 "io_size": 131072, 00:09:12.161 "runtime": 1.381465, 00:09:12.161 "iops": 14543.256615259887, 00:09:12.161 "mibps": 1817.9070769074858, 00:09:12.161 "io_failed": 1, 00:09:12.161 "io_timeout": 0, 00:09:12.161 "avg_latency_us": 96.10614857246188, 00:09:12.161 "min_latency_us": 25.152838427947597, 00:09:12.161 "max_latency_us": 1345.0620087336245 00:09:12.161 } 00:09:12.161 ], 00:09:12.161 "core_count": 1 00:09:12.161 } 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80000 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 80000 ']' 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 80000 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80000 00:09:12.161 killing process with pid 80000 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.161 
04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80000' 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 80000 00:09:12.161 [2024-12-13 04:25:12.163893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.161 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 80000 00:09:12.421 [2024-12-13 04:25:12.212851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GoLDJ0dwDs 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.682 ************************************ 00:09:12.682 END TEST raid_write_error_test 00:09:12.682 ************************************ 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:12.682 00:09:12.682 real 0m3.453s 00:09:12.682 user 0m4.310s 00:09:12.682 sys 0m0.601s 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.682 04:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 04:25:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:12.682 04:25:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:12.682 04:25:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.682 04:25:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.682 04:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 ************************************ 00:09:12.682 START TEST raid_state_function_test 00:09:12.682 ************************************ 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80133 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80133' 00:09:12.682 Process raid pid: 80133 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80133 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80133 ']' 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.682 04:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.943 [2024-12-13 04:25:12.718895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:12.943 [2024-12-13 04:25:12.719476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.943 [2024-12-13 04:25:12.875481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.943 [2024-12-13 04:25:12.915337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.211 [2024-12-13 04:25:12.992143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.211 [2024-12-13 04:25:12.992189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.789 [2024-12-13 04:25:13.542914] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.789 [2024-12-13 04:25:13.542981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.789 [2024-12-13 04:25:13.543003] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.789 [2024-12-13 04:25:13.543015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.789 [2024-12-13 04:25:13.543021] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.789 [2024-12-13 04:25:13.543034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.789 
04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.789 "name": "Existed_Raid", 00:09:13.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.789 "strip_size_kb": 0, 00:09:13.789 "state": "configuring", 00:09:13.789 "raid_level": "raid1", 00:09:13.789 "superblock": false, 00:09:13.789 "num_base_bdevs": 3, 00:09:13.789 "num_base_bdevs_discovered": 0, 00:09:13.789 "num_base_bdevs_operational": 3, 00:09:13.789 "base_bdevs_list": [ 00:09:13.789 { 00:09:13.789 "name": "BaseBdev1", 00:09:13.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.789 "is_configured": false, 00:09:13.789 "data_offset": 0, 00:09:13.789 "data_size": 0 00:09:13.789 }, 00:09:13.789 { 00:09:13.789 "name": "BaseBdev2", 00:09:13.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.789 "is_configured": false, 00:09:13.789 "data_offset": 0, 00:09:13.789 "data_size": 0 00:09:13.789 }, 00:09:13.789 { 00:09:13.789 "name": "BaseBdev3", 00:09:13.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.789 "is_configured": false, 00:09:13.789 "data_offset": 0, 00:09:13.789 "data_size": 0 00:09:13.789 } 00:09:13.789 ] 00:09:13.789 }' 00:09:13.789 04:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.789 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.048 [2024-12-13 04:25:13.962084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.048 [2024-12-13 04:25:13.962195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.048 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.048 [2024-12-13 04:25:13.970097] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.049 [2024-12-13 04:25:13.970142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.049 [2024-12-13 04:25:13.970150] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.049 [2024-12-13 04:25:13.970160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.049 [2024-12-13 04:25:13.970165] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.049 [2024-12-13 04:25:13.970174] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 [2024-12-13 04:25:13.992878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.049 BaseBdev1 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.049 04:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 [ 00:09:14.049 { 00:09:14.049 "name": "BaseBdev1", 00:09:14.049 "aliases": [ 00:09:14.049 "fdd5411f-8351-4eab-86ec-9ad4a721ccb0" 00:09:14.049 ], 00:09:14.049 "product_name": "Malloc disk", 00:09:14.049 "block_size": 512, 00:09:14.049 "num_blocks": 65536, 00:09:14.049 "uuid": "fdd5411f-8351-4eab-86ec-9ad4a721ccb0", 00:09:14.049 "assigned_rate_limits": { 00:09:14.049 "rw_ios_per_sec": 0, 00:09:14.049 "rw_mbytes_per_sec": 0, 00:09:14.049 "r_mbytes_per_sec": 0, 00:09:14.049 "w_mbytes_per_sec": 0 00:09:14.049 }, 00:09:14.049 "claimed": true, 00:09:14.049 "claim_type": "exclusive_write", 00:09:14.049 "zoned": false, 00:09:14.049 "supported_io_types": { 00:09:14.049 "read": true, 00:09:14.049 "write": true, 00:09:14.049 "unmap": true, 00:09:14.049 "flush": true, 00:09:14.049 "reset": true, 00:09:14.049 "nvme_admin": false, 00:09:14.049 "nvme_io": false, 00:09:14.049 "nvme_io_md": false, 00:09:14.049 "write_zeroes": true, 00:09:14.049 "zcopy": true, 00:09:14.049 "get_zone_info": false, 00:09:14.049 "zone_management": false, 00:09:14.049 "zone_append": false, 00:09:14.049 "compare": false, 00:09:14.049 "compare_and_write": false, 00:09:14.049 "abort": true, 00:09:14.049 "seek_hole": false, 00:09:14.049 "seek_data": false, 00:09:14.049 "copy": true, 00:09:14.049 "nvme_iov_md": false 00:09:14.049 }, 00:09:14.049 "memory_domains": [ 00:09:14.049 { 00:09:14.049 "dma_device_id": "system", 00:09:14.049 "dma_device_type": 1 00:09:14.049 }, 00:09:14.049 { 00:09:14.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.049 "dma_device_type": 2 00:09:14.049 } 00:09:14.049 ], 00:09:14.049 "driver_specific": {} 00:09:14.049 } 00:09:14.049 ] 00:09:14.049 04:25:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.309 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:14.309 "name": "Existed_Raid", 00:09:14.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.309 "strip_size_kb": 0, 00:09:14.309 "state": "configuring", 00:09:14.309 "raid_level": "raid1", 00:09:14.309 "superblock": false, 00:09:14.309 "num_base_bdevs": 3, 00:09:14.309 "num_base_bdevs_discovered": 1, 00:09:14.309 "num_base_bdevs_operational": 3, 00:09:14.309 "base_bdevs_list": [ 00:09:14.309 { 00:09:14.309 "name": "BaseBdev1", 00:09:14.309 "uuid": "fdd5411f-8351-4eab-86ec-9ad4a721ccb0", 00:09:14.309 "is_configured": true, 00:09:14.309 "data_offset": 0, 00:09:14.309 "data_size": 65536 00:09:14.309 }, 00:09:14.309 { 00:09:14.309 "name": "BaseBdev2", 00:09:14.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.309 "is_configured": false, 00:09:14.309 "data_offset": 0, 00:09:14.309 "data_size": 0 00:09:14.309 }, 00:09:14.309 { 00:09:14.309 "name": "BaseBdev3", 00:09:14.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.309 "is_configured": false, 00:09:14.309 "data_offset": 0, 00:09:14.309 "data_size": 0 00:09:14.309 } 00:09:14.309 ] 00:09:14.309 }' 00:09:14.309 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.309 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 [2024-12-13 04:25:14.448120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.569 [2024-12-13 04:25:14.448210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 [2024-12-13 04:25:14.456125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.569 [2024-12-13 04:25:14.458290] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.569 [2024-12-13 04:25:14.458381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.569 [2024-12-13 04:25:14.458409] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.569 [2024-12-13 04:25:14.458432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.569 "name": "Existed_Raid", 00:09:14.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.569 "strip_size_kb": 0, 00:09:14.569 "state": "configuring", 00:09:14.569 "raid_level": "raid1", 00:09:14.569 "superblock": false, 00:09:14.569 "num_base_bdevs": 3, 00:09:14.569 "num_base_bdevs_discovered": 1, 00:09:14.569 "num_base_bdevs_operational": 3, 00:09:14.569 "base_bdevs_list": [ 00:09:14.569 { 00:09:14.569 "name": "BaseBdev1", 00:09:14.569 "uuid": "fdd5411f-8351-4eab-86ec-9ad4a721ccb0", 00:09:14.569 "is_configured": true, 00:09:14.569 "data_offset": 0, 00:09:14.569 "data_size": 65536 00:09:14.569 }, 00:09:14.569 { 00:09:14.569 "name": "BaseBdev2", 00:09:14.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.569 
"is_configured": false, 00:09:14.569 "data_offset": 0, 00:09:14.569 "data_size": 0 00:09:14.569 }, 00:09:14.569 { 00:09:14.569 "name": "BaseBdev3", 00:09:14.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.569 "is_configured": false, 00:09:14.569 "data_offset": 0, 00:09:14.569 "data_size": 0 00:09:14.569 } 00:09:14.569 ] 00:09:14.569 }' 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.569 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.137 [2024-12-13 04:25:14.892078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.137 BaseBdev2 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.137 04:25:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.137 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.137 [ 00:09:15.137 { 00:09:15.138 "name": "BaseBdev2", 00:09:15.138 "aliases": [ 00:09:15.138 "8332c724-6e7e-4897-8fc0-380280831df0" 00:09:15.138 ], 00:09:15.138 "product_name": "Malloc disk", 00:09:15.138 "block_size": 512, 00:09:15.138 "num_blocks": 65536, 00:09:15.138 "uuid": "8332c724-6e7e-4897-8fc0-380280831df0", 00:09:15.138 "assigned_rate_limits": { 00:09:15.138 "rw_ios_per_sec": 0, 00:09:15.138 "rw_mbytes_per_sec": 0, 00:09:15.138 "r_mbytes_per_sec": 0, 00:09:15.138 "w_mbytes_per_sec": 0 00:09:15.138 }, 00:09:15.138 "claimed": true, 00:09:15.138 "claim_type": "exclusive_write", 00:09:15.138 "zoned": false, 00:09:15.138 "supported_io_types": { 00:09:15.138 "read": true, 00:09:15.138 "write": true, 00:09:15.138 "unmap": true, 00:09:15.138 "flush": true, 00:09:15.138 "reset": true, 00:09:15.138 "nvme_admin": false, 00:09:15.138 "nvme_io": false, 00:09:15.138 "nvme_io_md": false, 00:09:15.138 "write_zeroes": true, 00:09:15.138 "zcopy": true, 00:09:15.138 "get_zone_info": false, 00:09:15.138 "zone_management": false, 00:09:15.138 "zone_append": false, 00:09:15.138 "compare": false, 00:09:15.138 "compare_and_write": false, 00:09:15.138 "abort": true, 00:09:15.138 "seek_hole": false, 00:09:15.138 "seek_data": false, 00:09:15.138 "copy": true, 00:09:15.138 "nvme_iov_md": false 00:09:15.138 }, 00:09:15.138 
"memory_domains": [ 00:09:15.138 { 00:09:15.138 "dma_device_id": "system", 00:09:15.138 "dma_device_type": 1 00:09:15.138 }, 00:09:15.138 { 00:09:15.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.138 "dma_device_type": 2 00:09:15.138 } 00:09:15.138 ], 00:09:15.138 "driver_specific": {} 00:09:15.138 } 00:09:15.138 ] 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.138 "name": "Existed_Raid", 00:09:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.138 "strip_size_kb": 0, 00:09:15.138 "state": "configuring", 00:09:15.138 "raid_level": "raid1", 00:09:15.138 "superblock": false, 00:09:15.138 "num_base_bdevs": 3, 00:09:15.138 "num_base_bdevs_discovered": 2, 00:09:15.138 "num_base_bdevs_operational": 3, 00:09:15.138 "base_bdevs_list": [ 00:09:15.138 { 00:09:15.138 "name": "BaseBdev1", 00:09:15.138 "uuid": "fdd5411f-8351-4eab-86ec-9ad4a721ccb0", 00:09:15.138 "is_configured": true, 00:09:15.138 "data_offset": 0, 00:09:15.138 "data_size": 65536 00:09:15.138 }, 00:09:15.138 { 00:09:15.138 "name": "BaseBdev2", 00:09:15.138 "uuid": "8332c724-6e7e-4897-8fc0-380280831df0", 00:09:15.138 "is_configured": true, 00:09:15.138 "data_offset": 0, 00:09:15.138 "data_size": 65536 00:09:15.138 }, 00:09:15.138 { 00:09:15.138 "name": "BaseBdev3", 00:09:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.138 "is_configured": false, 00:09:15.138 "data_offset": 0, 00:09:15.138 "data_size": 0 00:09:15.138 } 00:09:15.138 ] 00:09:15.138 }' 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.138 04:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.397 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:15.397 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.397 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 [2024-12-13 04:25:15.415591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.656 [2024-12-13 04:25:15.415658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:15.656 [2024-12-13 04:25:15.415673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:15.656 [2024-12-13 04:25:15.416070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:15.656 [2024-12-13 04:25:15.416273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:15.656 [2024-12-13 04:25:15.416287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:15.656 [2024-12-13 04:25:15.416586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.656 BaseBdev3 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.656 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 [ 00:09:15.656 { 00:09:15.656 "name": "BaseBdev3", 00:09:15.656 "aliases": [ 00:09:15.656 "7ace3643-ddbf-406c-92ef-559a43fb4491" 00:09:15.656 ], 00:09:15.656 "product_name": "Malloc disk", 00:09:15.656 "block_size": 512, 00:09:15.656 "num_blocks": 65536, 00:09:15.656 "uuid": "7ace3643-ddbf-406c-92ef-559a43fb4491", 00:09:15.656 "assigned_rate_limits": { 00:09:15.656 "rw_ios_per_sec": 0, 00:09:15.656 "rw_mbytes_per_sec": 0, 00:09:15.656 "r_mbytes_per_sec": 0, 00:09:15.656 "w_mbytes_per_sec": 0 00:09:15.656 }, 00:09:15.656 "claimed": true, 00:09:15.656 "claim_type": "exclusive_write", 00:09:15.657 "zoned": false, 00:09:15.657 "supported_io_types": { 00:09:15.657 "read": true, 00:09:15.657 "write": true, 00:09:15.657 "unmap": true, 00:09:15.657 "flush": true, 00:09:15.657 "reset": true, 00:09:15.657 "nvme_admin": false, 00:09:15.657 "nvme_io": false, 00:09:15.657 "nvme_io_md": false, 00:09:15.657 "write_zeroes": true, 00:09:15.657 "zcopy": true, 00:09:15.657 "get_zone_info": false, 00:09:15.657 "zone_management": false, 00:09:15.657 "zone_append": false, 00:09:15.657 "compare": false, 00:09:15.657 "compare_and_write": false, 00:09:15.657 "abort": true, 00:09:15.657 "seek_hole": false, 00:09:15.657 "seek_data": false, 00:09:15.657 
"copy": true, 00:09:15.657 "nvme_iov_md": false 00:09:15.657 }, 00:09:15.657 "memory_domains": [ 00:09:15.657 { 00:09:15.657 "dma_device_id": "system", 00:09:15.657 "dma_device_type": 1 00:09:15.657 }, 00:09:15.657 { 00:09:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.657 "dma_device_type": 2 00:09:15.657 } 00:09:15.657 ], 00:09:15.657 "driver_specific": {} 00:09:15.657 } 00:09:15.657 ] 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.657 04:25:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.657 "name": "Existed_Raid", 00:09:15.657 "uuid": "0220c0f2-0dd8-4b50-8bee-5507a955a884", 00:09:15.657 "strip_size_kb": 0, 00:09:15.657 "state": "online", 00:09:15.657 "raid_level": "raid1", 00:09:15.657 "superblock": false, 00:09:15.657 "num_base_bdevs": 3, 00:09:15.657 "num_base_bdevs_discovered": 3, 00:09:15.657 "num_base_bdevs_operational": 3, 00:09:15.657 "base_bdevs_list": [ 00:09:15.657 { 00:09:15.657 "name": "BaseBdev1", 00:09:15.657 "uuid": "fdd5411f-8351-4eab-86ec-9ad4a721ccb0", 00:09:15.657 "is_configured": true, 00:09:15.657 "data_offset": 0, 00:09:15.657 "data_size": 65536 00:09:15.657 }, 00:09:15.657 { 00:09:15.657 "name": "BaseBdev2", 00:09:15.657 "uuid": "8332c724-6e7e-4897-8fc0-380280831df0", 00:09:15.657 "is_configured": true, 00:09:15.657 "data_offset": 0, 00:09:15.657 "data_size": 65536 00:09:15.657 }, 00:09:15.657 { 00:09:15.657 "name": "BaseBdev3", 00:09:15.657 "uuid": "7ace3643-ddbf-406c-92ef-559a43fb4491", 00:09:15.657 "is_configured": true, 00:09:15.657 "data_offset": 0, 00:09:15.657 "data_size": 65536 00:09:15.657 } 00:09:15.657 ] 00:09:15.657 }' 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.657 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 04:25:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.917 [2024-12-13 04:25:15.879118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.917 "name": "Existed_Raid", 00:09:15.917 "aliases": [ 00:09:15.917 "0220c0f2-0dd8-4b50-8bee-5507a955a884" 00:09:15.917 ], 00:09:15.917 "product_name": "Raid Volume", 00:09:15.917 "block_size": 512, 00:09:15.917 "num_blocks": 65536, 00:09:15.917 "uuid": "0220c0f2-0dd8-4b50-8bee-5507a955a884", 00:09:15.917 "assigned_rate_limits": { 00:09:15.917 "rw_ios_per_sec": 0, 00:09:15.917 "rw_mbytes_per_sec": 0, 00:09:15.917 "r_mbytes_per_sec": 0, 00:09:15.917 "w_mbytes_per_sec": 0 00:09:15.917 }, 00:09:15.917 "claimed": false, 00:09:15.917 "zoned": false, 
00:09:15.917 "supported_io_types": { 00:09:15.917 "read": true, 00:09:15.917 "write": true, 00:09:15.917 "unmap": false, 00:09:15.917 "flush": false, 00:09:15.917 "reset": true, 00:09:15.917 "nvme_admin": false, 00:09:15.917 "nvme_io": false, 00:09:15.917 "nvme_io_md": false, 00:09:15.917 "write_zeroes": true, 00:09:15.917 "zcopy": false, 00:09:15.917 "get_zone_info": false, 00:09:15.917 "zone_management": false, 00:09:15.917 "zone_append": false, 00:09:15.917 "compare": false, 00:09:15.917 "compare_and_write": false, 00:09:15.917 "abort": false, 00:09:15.917 "seek_hole": false, 00:09:15.917 "seek_data": false, 00:09:15.917 "copy": false, 00:09:15.917 "nvme_iov_md": false 00:09:15.917 }, 00:09:15.917 "memory_domains": [ 00:09:15.917 { 00:09:15.917 "dma_device_id": "system", 00:09:15.917 "dma_device_type": 1 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.917 "dma_device_type": 2 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "dma_device_id": "system", 00:09:15.917 "dma_device_type": 1 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.917 "dma_device_type": 2 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "dma_device_id": "system", 00:09:15.917 "dma_device_type": 1 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.917 "dma_device_type": 2 00:09:15.917 } 00:09:15.917 ], 00:09:15.917 "driver_specific": { 00:09:15.917 "raid": { 00:09:15.917 "uuid": "0220c0f2-0dd8-4b50-8bee-5507a955a884", 00:09:15.917 "strip_size_kb": 0, 00:09:15.917 "state": "online", 00:09:15.917 "raid_level": "raid1", 00:09:15.917 "superblock": false, 00:09:15.917 "num_base_bdevs": 3, 00:09:15.917 "num_base_bdevs_discovered": 3, 00:09:15.917 "num_base_bdevs_operational": 3, 00:09:15.917 "base_bdevs_list": [ 00:09:15.918 { 00:09:15.918 "name": "BaseBdev1", 00:09:15.918 "uuid": "fdd5411f-8351-4eab-86ec-9ad4a721ccb0", 00:09:15.918 "is_configured": true, 00:09:15.918 
"data_offset": 0, 00:09:15.918 "data_size": 65536 00:09:15.918 }, 00:09:15.918 { 00:09:15.918 "name": "BaseBdev2", 00:09:15.918 "uuid": "8332c724-6e7e-4897-8fc0-380280831df0", 00:09:15.918 "is_configured": true, 00:09:15.918 "data_offset": 0, 00:09:15.918 "data_size": 65536 00:09:15.918 }, 00:09:15.918 { 00:09:15.918 "name": "BaseBdev3", 00:09:15.918 "uuid": "7ace3643-ddbf-406c-92ef-559a43fb4491", 00:09:15.918 "is_configured": true, 00:09:15.918 "data_offset": 0, 00:09:15.918 "data_size": 65536 00:09:15.918 } 00:09:15.918 ] 00:09:15.918 } 00:09:15.918 } 00:09:15.918 }' 00:09:15.918 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.178 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.178 BaseBdev2 00:09:16.178 BaseBdev3' 00:09:16.178 04:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 [2024-12-13 04:25:16.158392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.178 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.437 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.437 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.437 "name": "Existed_Raid", 00:09:16.437 "uuid": "0220c0f2-0dd8-4b50-8bee-5507a955a884", 00:09:16.437 "strip_size_kb": 0, 00:09:16.437 "state": "online", 00:09:16.437 "raid_level": "raid1", 00:09:16.437 "superblock": false, 00:09:16.437 "num_base_bdevs": 3, 00:09:16.437 "num_base_bdevs_discovered": 2, 00:09:16.437 "num_base_bdevs_operational": 2, 00:09:16.437 "base_bdevs_list": [ 00:09:16.437 { 00:09:16.437 "name": null, 00:09:16.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.437 "is_configured": false, 00:09:16.437 "data_offset": 0, 00:09:16.437 "data_size": 65536 00:09:16.437 }, 00:09:16.437 { 00:09:16.437 "name": "BaseBdev2", 00:09:16.437 "uuid": "8332c724-6e7e-4897-8fc0-380280831df0", 00:09:16.437 "is_configured": true, 00:09:16.437 "data_offset": 0, 00:09:16.437 "data_size": 65536 00:09:16.437 }, 00:09:16.437 { 00:09:16.437 "name": "BaseBdev3", 00:09:16.437 "uuid": "7ace3643-ddbf-406c-92ef-559a43fb4491", 00:09:16.437 "is_configured": true, 00:09:16.437 "data_offset": 0, 00:09:16.437 "data_size": 65536 00:09:16.437 } 00:09:16.437 ] 
00:09:16.437 }' 00:09:16.437 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.437 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.697 [2024-12-13 04:25:16.654229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.697 04:25:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.697 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 [2024-12-13 04:25:16.734594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.958 [2024-12-13 04:25:16.734740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.958 [2024-12-13 04:25:16.755854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.958 [2024-12-13 04:25:16.755971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.958 [2024-12-13 04:25:16.756019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.958 04:25:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 BaseBdev2 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.958 
04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 [ 00:09:16.958 { 00:09:16.958 "name": "BaseBdev2", 00:09:16.958 "aliases": [ 00:09:16.958 "8187dd48-6bce-4e89-9487-a3f661e93fc6" 00:09:16.958 ], 00:09:16.958 "product_name": "Malloc disk", 00:09:16.958 "block_size": 512, 00:09:16.958 "num_blocks": 65536, 00:09:16.958 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6", 00:09:16.958 "assigned_rate_limits": { 00:09:16.958 "rw_ios_per_sec": 0, 00:09:16.958 "rw_mbytes_per_sec": 0, 00:09:16.958 "r_mbytes_per_sec": 0, 00:09:16.958 "w_mbytes_per_sec": 0 00:09:16.958 }, 00:09:16.958 "claimed": false, 00:09:16.958 "zoned": false, 00:09:16.958 "supported_io_types": { 00:09:16.958 "read": true, 00:09:16.958 "write": true, 00:09:16.958 "unmap": true, 00:09:16.958 "flush": true, 00:09:16.958 "reset": true, 00:09:16.958 "nvme_admin": false, 00:09:16.958 "nvme_io": false, 00:09:16.958 "nvme_io_md": false, 00:09:16.958 "write_zeroes": true, 
00:09:16.958 "zcopy": true, 00:09:16.958 "get_zone_info": false, 00:09:16.958 "zone_management": false, 00:09:16.958 "zone_append": false, 00:09:16.958 "compare": false, 00:09:16.958 "compare_and_write": false, 00:09:16.958 "abort": true, 00:09:16.958 "seek_hole": false, 00:09:16.958 "seek_data": false, 00:09:16.958 "copy": true, 00:09:16.958 "nvme_iov_md": false 00:09:16.958 }, 00:09:16.958 "memory_domains": [ 00:09:16.958 { 00:09:16.958 "dma_device_id": "system", 00:09:16.958 "dma_device_type": 1 00:09:16.958 }, 00:09:16.958 { 00:09:16.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.958 "dma_device_type": 2 00:09:16.958 } 00:09:16.958 ], 00:09:16.958 "driver_specific": {} 00:09:16.958 } 00:09:16.958 ] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 BaseBdev3 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.958 04:25:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 [ 00:09:16.958 { 00:09:16.959 "name": "BaseBdev3", 00:09:16.959 "aliases": [ 00:09:16.959 "71474d7b-bbee-4e87-bde9-d263a4d8dc45" 00:09:16.959 ], 00:09:16.959 "product_name": "Malloc disk", 00:09:16.959 "block_size": 512, 00:09:16.959 "num_blocks": 65536, 00:09:16.959 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45", 00:09:16.959 "assigned_rate_limits": { 00:09:16.959 "rw_ios_per_sec": 0, 00:09:16.959 "rw_mbytes_per_sec": 0, 00:09:16.959 "r_mbytes_per_sec": 0, 00:09:16.959 "w_mbytes_per_sec": 0 00:09:16.959 }, 00:09:16.959 "claimed": false, 00:09:16.959 "zoned": false, 00:09:16.959 "supported_io_types": { 00:09:16.959 "read": true, 00:09:16.959 "write": true, 00:09:16.959 "unmap": true, 00:09:16.959 "flush": true, 00:09:16.959 "reset": true, 00:09:16.959 "nvme_admin": false, 00:09:16.959 "nvme_io": false, 00:09:16.959 "nvme_io_md": false, 00:09:16.959 "write_zeroes": true, 
00:09:16.959 "zcopy": true, 00:09:16.959 "get_zone_info": false, 00:09:16.959 "zone_management": false, 00:09:16.959 "zone_append": false, 00:09:16.959 "compare": false, 00:09:16.959 "compare_and_write": false, 00:09:16.959 "abort": true, 00:09:16.959 "seek_hole": false, 00:09:16.959 "seek_data": false, 00:09:16.959 "copy": true, 00:09:16.959 "nvme_iov_md": false 00:09:16.959 }, 00:09:16.959 "memory_domains": [ 00:09:16.959 { 00:09:16.959 "dma_device_id": "system", 00:09:16.959 "dma_device_type": 1 00:09:16.959 }, 00:09:16.959 { 00:09:16.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.959 "dma_device_type": 2 00:09:16.959 } 00:09:16.959 ], 00:09:16.959 "driver_specific": {} 00:09:16.959 } 00:09:16.959 ] 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.959 [2024-12-13 04:25:16.933565] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.959 [2024-12-13 04:25:16.933613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.959 [2024-12-13 04:25:16.933637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.959 [2024-12-13 04:25:16.935750] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.959 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.218 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:17.218 "name": "Existed_Raid", 00:09:17.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.218 "strip_size_kb": 0, 00:09:17.218 "state": "configuring", 00:09:17.218 "raid_level": "raid1", 00:09:17.218 "superblock": false, 00:09:17.218 "num_base_bdevs": 3, 00:09:17.218 "num_base_bdevs_discovered": 2, 00:09:17.218 "num_base_bdevs_operational": 3, 00:09:17.218 "base_bdevs_list": [ 00:09:17.218 { 00:09:17.218 "name": "BaseBdev1", 00:09:17.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.218 "is_configured": false, 00:09:17.218 "data_offset": 0, 00:09:17.218 "data_size": 0 00:09:17.218 }, 00:09:17.218 { 00:09:17.218 "name": "BaseBdev2", 00:09:17.218 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6", 00:09:17.218 "is_configured": true, 00:09:17.218 "data_offset": 0, 00:09:17.218 "data_size": 65536 00:09:17.218 }, 00:09:17.218 { 00:09:17.218 "name": "BaseBdev3", 00:09:17.218 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45", 00:09:17.218 "is_configured": true, 00:09:17.218 "data_offset": 0, 00:09:17.218 "data_size": 65536 00:09:17.218 } 00:09:17.218 ] 00:09:17.218 }' 00:09:17.218 04:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.218 04:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.478 [2024-12-13 04:25:17.396716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.478 "name": "Existed_Raid", 00:09:17.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.478 "strip_size_kb": 0, 00:09:17.478 "state": "configuring", 00:09:17.478 "raid_level": "raid1", 00:09:17.478 "superblock": false, 00:09:17.478 "num_base_bdevs": 3, 
00:09:17.478 "num_base_bdevs_discovered": 1, 00:09:17.478 "num_base_bdevs_operational": 3, 00:09:17.478 "base_bdevs_list": [ 00:09:17.478 { 00:09:17.478 "name": "BaseBdev1", 00:09:17.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.478 "is_configured": false, 00:09:17.478 "data_offset": 0, 00:09:17.478 "data_size": 0 00:09:17.478 }, 00:09:17.478 { 00:09:17.478 "name": null, 00:09:17.478 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6", 00:09:17.478 "is_configured": false, 00:09:17.478 "data_offset": 0, 00:09:17.478 "data_size": 65536 00:09:17.478 }, 00:09:17.478 { 00:09:17.478 "name": "BaseBdev3", 00:09:17.478 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45", 00:09:17.478 "is_configured": true, 00:09:17.478 "data_offset": 0, 00:09:17.478 "data_size": 65536 00:09:17.478 } 00:09:17.478 ] 00:09:17.478 }' 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.478 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.049 04:25:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.049 [2024-12-13 04:25:17.892991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.049 BaseBdev1 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.049 [ 00:09:18.049 { 00:09:18.049 "name": "BaseBdev1", 00:09:18.049 "aliases": [ 00:09:18.049 "5c26fa66-eb3c-4d57-a31a-61f746c26cef" 00:09:18.049 ], 00:09:18.049 "product_name": "Malloc disk", 
00:09:18.049 "block_size": 512,
00:09:18.049 "num_blocks": 65536,
00:09:18.049 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:18.049 "assigned_rate_limits": {
00:09:18.049 "rw_ios_per_sec": 0,
00:09:18.049 "rw_mbytes_per_sec": 0,
00:09:18.049 "r_mbytes_per_sec": 0,
00:09:18.049 "w_mbytes_per_sec": 0
00:09:18.049 },
00:09:18.049 "claimed": true,
00:09:18.049 "claim_type": "exclusive_write",
00:09:18.049 "zoned": false,
00:09:18.049 "supported_io_types": {
00:09:18.049 "read": true,
00:09:18.049 "write": true,
00:09:18.049 "unmap": true,
00:09:18.049 "flush": true,
00:09:18.049 "reset": true,
00:09:18.049 "nvme_admin": false,
00:09:18.049 "nvme_io": false,
00:09:18.049 "nvme_io_md": false,
00:09:18.049 "write_zeroes": true,
00:09:18.049 "zcopy": true,
00:09:18.049 "get_zone_info": false,
00:09:18.049 "zone_management": false,
00:09:18.049 "zone_append": false,
00:09:18.049 "compare": false,
00:09:18.049 "compare_and_write": false,
00:09:18.049 "abort": true,
00:09:18.049 "seek_hole": false,
00:09:18.049 "seek_data": false,
00:09:18.049 "copy": true,
00:09:18.049 "nvme_iov_md": false
00:09:18.049 },
00:09:18.049 "memory_domains": [
00:09:18.049 {
00:09:18.049 "dma_device_id": "system",
00:09:18.049 "dma_device_type": 1
00:09:18.049 },
00:09:18.049 {
00:09:18.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:18.049 "dma_device_type": 2
00:09:18.049 }
00:09:18.049 ],
00:09:18.049 "driver_specific": {}
00:09:18.049 }
00:09:18.049 ]
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.049 "name": "Existed_Raid",
00:09:18.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.049 "strip_size_kb": 0,
00:09:18.049 "state": "configuring",
00:09:18.049 "raid_level": "raid1",
00:09:18.049 "superblock": false,
00:09:18.049 "num_base_bdevs": 3,
00:09:18.049 "num_base_bdevs_discovered": 2,
00:09:18.049 "num_base_bdevs_operational": 3,
00:09:18.049 "base_bdevs_list": [
00:09:18.049 {
00:09:18.049 "name": "BaseBdev1",
00:09:18.049 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:18.049 "is_configured": true,
00:09:18.049 "data_offset": 0,
00:09:18.049 "data_size": 65536
00:09:18.049 },
00:09:18.049 {
00:09:18.049 "name": null,
00:09:18.049 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:18.049 "is_configured": false,
00:09:18.049 "data_offset": 0,
00:09:18.049 "data_size": 65536
00:09:18.049 },
00:09:18.049 {
00:09:18.049 "name": "BaseBdev3",
00:09:18.049 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:18.049 "is_configured": true,
00:09:18.049 "data_offset": 0,
00:09:18.049 "data_size": 65536
00:09:18.049 }
00:09:18.049 ]
00:09:18.049 }'
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.049 04:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.619 [2024-12-13 04:25:18.428157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.619 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.620 "name": "Existed_Raid",
00:09:18.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.620 "strip_size_kb": 0,
00:09:18.620 "state": "configuring",
00:09:18.620 "raid_level": "raid1",
00:09:18.620 "superblock": false,
00:09:18.620 "num_base_bdevs": 3,
00:09:18.620 "num_base_bdevs_discovered": 1,
00:09:18.620 "num_base_bdevs_operational": 3,
00:09:18.620 "base_bdevs_list": [
00:09:18.620 {
00:09:18.620 "name": "BaseBdev1",
00:09:18.620 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:18.620 "is_configured": true,
00:09:18.620 "data_offset": 0,
00:09:18.620 "data_size": 65536
00:09:18.620 },
00:09:18.620 {
00:09:18.620 "name": null,
00:09:18.620 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:18.620 "is_configured": false,
00:09:18.620 "data_offset": 0,
00:09:18.620 "data_size": 65536
00:09:18.620 },
00:09:18.620 {
00:09:18.620 "name": null,
00:09:18.620 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:18.620 "is_configured": false,
00:09:18.620 "data_offset": 0,
00:09:18.620 "data_size": 65536
00:09:18.620 }
00:09:18.620 ]
00:09:18.620 }'
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.620 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.880 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.880 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.880 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:18.880 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.140 [2024-12-13 04:25:18.943289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:19.140 04:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.140 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.140 "name": "Existed_Raid",
00:09:19.140 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:19.140 "strip_size_kb": 0,
00:09:19.140 "state": "configuring",
00:09:19.140 "raid_level": "raid1",
00:09:19.140 "superblock": false,
00:09:19.140 "num_base_bdevs": 3,
00:09:19.140 "num_base_bdevs_discovered": 2,
00:09:19.140 "num_base_bdevs_operational": 3,
00:09:19.140 "base_bdevs_list": [
00:09:19.140 {
00:09:19.140 "name": "BaseBdev1",
00:09:19.140 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:19.140 "is_configured": true,
00:09:19.140 "data_offset": 0,
00:09:19.140 "data_size": 65536
00:09:19.140 },
00:09:19.140 {
00:09:19.140 "name": null,
00:09:19.140 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:19.140 "is_configured": false,
00:09:19.140 "data_offset": 0,
00:09:19.140 "data_size": 65536
00:09:19.140 },
00:09:19.140 {
00:09:19.140 "name": "BaseBdev3",
00:09:19.140 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:19.140 "is_configured": true,
00:09:19.140 "data_offset": 0,
00:09:19.140 "data_size": 65536
00:09:19.140 }
00:09:19.140 ]
00:09:19.140 }'
00:09:19.140 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.140 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.400 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.661 [2024-12-13 04:25:19.418529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.661 "name": "Existed_Raid",
00:09:19.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:19.661 "strip_size_kb": 0,
00:09:19.661 "state": "configuring",
00:09:19.661 "raid_level": "raid1",
00:09:19.661 "superblock": false,
00:09:19.661 "num_base_bdevs": 3,
00:09:19.661 "num_base_bdevs_discovered": 1,
00:09:19.661 "num_base_bdevs_operational": 3,
00:09:19.661 "base_bdevs_list": [
00:09:19.661 {
00:09:19.661 "name": null,
00:09:19.661 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:19.661 "is_configured": false,
00:09:19.661 "data_offset": 0,
00:09:19.661 "data_size": 65536
00:09:19.661 },
00:09:19.661 {
00:09:19.661 "name": null,
00:09:19.661 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:19.661 "is_configured": false,
00:09:19.661 "data_offset": 0,
00:09:19.661 "data_size": 65536
00:09:19.661 },
00:09:19.661 {
00:09:19.661 "name": "BaseBdev3",
00:09:19.661 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:19.661 "is_configured": true,
00:09:19.661 "data_offset": 0,
00:09:19.661 "data_size": 65536
00:09:19.661 }
00:09:19.661 ]
00:09:19.661 }'
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.661 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:19.921 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.921 [2024-12-13 04:25:19.933462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.181 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.181 "name": "Existed_Raid",
00:09:20.181 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:20.181 "strip_size_kb": 0,
00:09:20.181 "state": "configuring",
00:09:20.181 "raid_level": "raid1",
00:09:20.181 "superblock": false,
00:09:20.181 "num_base_bdevs": 3,
00:09:20.181 "num_base_bdevs_discovered": 2,
00:09:20.181 "num_base_bdevs_operational": 3,
00:09:20.182 "base_bdevs_list": [
00:09:20.182 {
00:09:20.182 "name": null,
00:09:20.182 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:20.182 "is_configured": false,
00:09:20.182 "data_offset": 0,
00:09:20.182 "data_size": 65536
00:09:20.182 },
00:09:20.182 {
00:09:20.182 "name": "BaseBdev2",
00:09:20.182 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:20.182 "is_configured": true,
00:09:20.182 "data_offset": 0,
00:09:20.182 "data_size": 65536
00:09:20.182 },
00:09:20.182 {
00:09:20.182 "name": "BaseBdev3",
00:09:20.182 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:20.182 "is_configured": true,
00:09:20.182 "data_offset": 0,
00:09:20.182 "data_size": 65536
00:09:20.182 }
00:09:20.182 ]
00:09:20.182 }'
00:09:20.182 04:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.182 04:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.441 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5c26fa66-eb3c-4d57-a31a-61f746c26cef
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.701 [2024-12-13 04:25:20.505186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:20.701 [2024-12-13 04:25:20.505242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:09:20.701 [2024-12-13 04:25:20.505250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:09:20.701 [2024-12-13 04:25:20.505548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
00:09:20.701 [2024-12-13 04:25:20.505694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:09:20.701 [2024-12-13 04:25:20.505744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
00:09:20.701 [2024-12-13 04:25:20.505959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:20.701 NewBaseBdev
04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.701 [
00:09:20.701 {
00:09:20.701 "name": "NewBaseBdev",
00:09:20.701 "aliases": [
00:09:20.701 "5c26fa66-eb3c-4d57-a31a-61f746c26cef"
00:09:20.701 ],
00:09:20.701 "product_name": "Malloc disk",
00:09:20.701 "block_size": 512,
00:09:20.701 "num_blocks": 65536,
00:09:20.701 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:20.701 "assigned_rate_limits": {
00:09:20.701 "rw_ios_per_sec": 0,
00:09:20.701 "rw_mbytes_per_sec": 0,
00:09:20.701 "r_mbytes_per_sec": 0,
00:09:20.701 "w_mbytes_per_sec": 0
00:09:20.701 },
00:09:20.701 "claimed": true,
00:09:20.701 "claim_type": "exclusive_write",
00:09:20.701 "zoned": false,
00:09:20.701 "supported_io_types": {
00:09:20.701 "read": true,
00:09:20.701 "write": true,
00:09:20.701 "unmap": true,
00:09:20.701 "flush": true,
00:09:20.701 "reset": true,
00:09:20.701 "nvme_admin": false,
00:09:20.701 "nvme_io": false,
00:09:20.701 "nvme_io_md": false,
00:09:20.701 "write_zeroes": true,
00:09:20.701 "zcopy": true,
00:09:20.701 "get_zone_info": false,
00:09:20.701 "zone_management": false,
00:09:20.701 "zone_append": false,
00:09:20.701 "compare": false,
00:09:20.701 "compare_and_write": false,
00:09:20.701 "abort": true,
00:09:20.701 "seek_hole": false,
00:09:20.701 "seek_data": false,
00:09:20.701 "copy": true,
00:09:20.701 "nvme_iov_md": false
00:09:20.701 },
00:09:20.701 "memory_domains": [
00:09:20.701 {
00:09:20.701 "dma_device_id": "system",
00:09:20.701 "dma_device_type": 1
00:09:20.701 },
00:09:20.701 {
00:09:20.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:20.701 "dma_device_type": 2
00:09:20.701 }
00:09:20.701 ],
00:09:20.701 "driver_specific": {}
00:09:20.701 }
00:09:20.701 ]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.701 "name": "Existed_Raid",
00:09:20.701 "uuid": "bdb0146e-b039-419d-b02e-9696b854189e",
00:09:20.701 "strip_size_kb": 0,
00:09:20.701 "state": "online",
00:09:20.701 "raid_level": "raid1",
00:09:20.701 "superblock": false,
00:09:20.701 "num_base_bdevs": 3,
00:09:20.701 "num_base_bdevs_discovered": 3,
00:09:20.701 "num_base_bdevs_operational": 3,
00:09:20.701 "base_bdevs_list": [
00:09:20.701 {
00:09:20.701 "name": "NewBaseBdev",
00:09:20.701 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:20.701 "is_configured": true,
00:09:20.701 "data_offset": 0,
00:09:20.701 "data_size": 65536
00:09:20.701 },
00:09:20.701 {
00:09:20.701 "name": "BaseBdev2",
00:09:20.701 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:20.701 "is_configured": true,
00:09:20.701 "data_offset": 0,
00:09:20.701 "data_size": 65536
00:09:20.701 },
00:09:20.701 {
00:09:20.701 "name": "BaseBdev3",
00:09:20.701 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:20.701 "is_configured": true,
00:09:20.701 "data_offset": 0,
00:09:20.701 "data_size": 65536
00:09:20.701 }
00:09:20.701 ]
00:09:20.701 }'
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.701 04:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:21.270 [2024-12-13 04:25:21.012671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.270 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:21.270 "name": "Existed_Raid",
00:09:21.270 "aliases": [
00:09:21.270 "bdb0146e-b039-419d-b02e-9696b854189e"
00:09:21.270 ],
00:09:21.270 "product_name": "Raid Volume",
00:09:21.271 "block_size": 512,
00:09:21.271 "num_blocks": 65536,
00:09:21.271 "uuid": "bdb0146e-b039-419d-b02e-9696b854189e",
00:09:21.271 "assigned_rate_limits": {
00:09:21.271 "rw_ios_per_sec": 0,
00:09:21.271 "rw_mbytes_per_sec": 0,
00:09:21.271 "r_mbytes_per_sec": 0,
00:09:21.271 "w_mbytes_per_sec": 0
00:09:21.271 },
00:09:21.271 "claimed": false,
00:09:21.271 "zoned": false,
00:09:21.271 "supported_io_types": {
00:09:21.271 "read": true,
00:09:21.271 "write": true,
00:09:21.271 "unmap": false,
00:09:21.271 "flush": false,
00:09:21.271 "reset": true,
00:09:21.271 "nvme_admin": false,
00:09:21.271 "nvme_io": false,
00:09:21.271 "nvme_io_md": false,
00:09:21.271 "write_zeroes": true,
00:09:21.271 "zcopy": false,
00:09:21.271 "get_zone_info": false,
00:09:21.271 "zone_management": false,
00:09:21.271 "zone_append": false,
00:09:21.271 "compare": false,
00:09:21.271 "compare_and_write": false,
00:09:21.271 "abort": false,
00:09:21.271 "seek_hole": false,
00:09:21.271 "seek_data": false,
00:09:21.271 "copy": false,
00:09:21.271 "nvme_iov_md": false
00:09:21.271 },
00:09:21.271 "memory_domains": [
00:09:21.271 {
00:09:21.271 "dma_device_id": "system",
00:09:21.271 "dma_device_type": 1
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.271 "dma_device_type": 2
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "dma_device_id": "system",
00:09:21.271 "dma_device_type": 1
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.271 "dma_device_type": 2
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "dma_device_id": "system",
00:09:21.271 "dma_device_type": 1
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.271 "dma_device_type": 2
00:09:21.271 }
00:09:21.271 ],
00:09:21.271 "driver_specific": {
00:09:21.271 "raid": {
00:09:21.271 "uuid": "bdb0146e-b039-419d-b02e-9696b854189e",
00:09:21.271 "strip_size_kb": 0,
00:09:21.271 "state": "online",
00:09:21.271 "raid_level": "raid1",
00:09:21.271 "superblock": false,
00:09:21.271 "num_base_bdevs": 3,
00:09:21.271 "num_base_bdevs_discovered": 3,
00:09:21.271 "num_base_bdevs_operational": 3,
00:09:21.271 "base_bdevs_list": [
00:09:21.271 {
00:09:21.271 "name": "NewBaseBdev",
00:09:21.271 "uuid": "5c26fa66-eb3c-4d57-a31a-61f746c26cef",
00:09:21.271 "is_configured": true,
00:09:21.271 "data_offset": 0,
00:09:21.271 "data_size": 65536
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "name": "BaseBdev2",
00:09:21.271 "uuid": "8187dd48-6bce-4e89-9487-a3f661e93fc6",
00:09:21.271 "is_configured": true,
00:09:21.271 "data_offset": 0,
00:09:21.271 "data_size": 65536
00:09:21.271 },
00:09:21.271 {
00:09:21.271 "name": "BaseBdev3",
00:09:21.271 "uuid": "71474d7b-bbee-4e87-bde9-d263a4d8dc45",
00:09:21.271 "is_configured": true,
00:09:21.271 "data_offset": 0,
00:09:21.271 "data_size": 65536
00:09:21.271 }
00:09:21.271 ]
00:09:21.271 }
00:09:21.271 }
00:09:21.271 }'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:21.271 BaseBdev2
00:09:21.271 BaseBdev3'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.271 [2024-12-13 04:25:21.239966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-12-13 04:25:21.239996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-13 04:25:21.240066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-12-13 04:25:21.240349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-12-13 04:25:21.240360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80133
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80133 ']'
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80133
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80133
killing process with pid 80133
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80133'
04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80133
[2024-12-13 04:25:21.279525] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*:
raid_bdev_fini_start 00:09:21.271 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80133 00:09:21.530 [2024-12-13 04:25:21.338743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.790 ************************************ 00:09:21.790 END TEST raid_state_function_test 00:09:21.790 04:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:21.790 00:09:21.790 real 0m9.040s 00:09:21.790 user 0m15.127s 00:09:21.790 sys 0m1.947s 00:09:21.790 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.790 04:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.790 ************************************ 00:09:21.790 04:25:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:21.790 04:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.790 04:25:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.790 04:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.790 ************************************ 00:09:21.790 START TEST raid_state_function_test_sb 00:09:21.790 ************************************ 00:09:21.790 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80733 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.791 Process raid pid: 80733 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80733' 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80733 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80733 ']' 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.791 04:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.051 [2024-12-13 04:25:21.842713] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:22.051 [2024-12-13 04:25:21.842936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.051 [2024-12-13 04:25:21.998924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.051 [2024-12-13 04:25:22.037457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.310 [2024-12-13 04:25:22.113106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.310 [2024-12-13 04:25:22.113144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.880 [2024-12-13 04:25:22.654842] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.880 [2024-12-13 04:25:22.654910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.880 [2024-12-13 04:25:22.654922] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.880 [2024-12-13 04:25:22.654933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.880 [2024-12-13 04:25:22.654940] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:22.880 [2024-12-13 04:25:22.654952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.880 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.881 "name": "Existed_Raid", 00:09:22.881 "uuid": "cfc71e04-2a51-40f2-8ec3-bb269f1e04a2", 00:09:22.881 "strip_size_kb": 0, 00:09:22.881 "state": "configuring", 00:09:22.881 "raid_level": "raid1", 00:09:22.881 "superblock": true, 00:09:22.881 "num_base_bdevs": 3, 00:09:22.881 "num_base_bdevs_discovered": 0, 00:09:22.881 "num_base_bdevs_operational": 3, 00:09:22.881 "base_bdevs_list": [ 00:09:22.881 { 00:09:22.881 "name": "BaseBdev1", 00:09:22.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.881 "is_configured": false, 00:09:22.881 "data_offset": 0, 00:09:22.881 "data_size": 0 00:09:22.881 }, 00:09:22.881 { 00:09:22.881 "name": "BaseBdev2", 00:09:22.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.881 "is_configured": false, 00:09:22.881 "data_offset": 0, 00:09:22.881 "data_size": 0 00:09:22.881 }, 00:09:22.881 { 00:09:22.881 "name": "BaseBdev3", 00:09:22.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.881 "is_configured": false, 00:09:22.881 "data_offset": 0, 00:09:22.881 "data_size": 0 00:09:22.881 } 00:09:22.881 ] 00:09:22.881 }' 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.881 04:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.141 [2024-12-13 04:25:23.086036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.141 [2024-12-13 04:25:23.086125] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.141 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.142 [2024-12-13 04:25:23.098043] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.142 [2024-12-13 04:25:23.098122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.142 [2024-12-13 04:25:23.098148] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.142 [2024-12-13 04:25:23.098170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.142 [2024-12-13 04:25:23.098187] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.142 [2024-12-13 04:25:23.098208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.142 [2024-12-13 04:25:23.124803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.142 BaseBdev1 
00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.142 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.402 [ 00:09:23.402 { 00:09:23.402 "name": "BaseBdev1", 00:09:23.402 "aliases": [ 00:09:23.402 "abec52a6-b10b-457d-8059-b22afcb2ca2c" 00:09:23.402 ], 00:09:23.402 "product_name": "Malloc disk", 00:09:23.402 "block_size": 512, 00:09:23.402 "num_blocks": 65536, 00:09:23.402 "uuid": "abec52a6-b10b-457d-8059-b22afcb2ca2c", 00:09:23.402 "assigned_rate_limits": { 00:09:23.402 
"rw_ios_per_sec": 0, 00:09:23.402 "rw_mbytes_per_sec": 0, 00:09:23.402 "r_mbytes_per_sec": 0, 00:09:23.402 "w_mbytes_per_sec": 0 00:09:23.402 }, 00:09:23.402 "claimed": true, 00:09:23.402 "claim_type": "exclusive_write", 00:09:23.402 "zoned": false, 00:09:23.402 "supported_io_types": { 00:09:23.402 "read": true, 00:09:23.402 "write": true, 00:09:23.402 "unmap": true, 00:09:23.402 "flush": true, 00:09:23.402 "reset": true, 00:09:23.402 "nvme_admin": false, 00:09:23.402 "nvme_io": false, 00:09:23.402 "nvme_io_md": false, 00:09:23.402 "write_zeroes": true, 00:09:23.402 "zcopy": true, 00:09:23.402 "get_zone_info": false, 00:09:23.402 "zone_management": false, 00:09:23.402 "zone_append": false, 00:09:23.402 "compare": false, 00:09:23.402 "compare_and_write": false, 00:09:23.402 "abort": true, 00:09:23.402 "seek_hole": false, 00:09:23.402 "seek_data": false, 00:09:23.402 "copy": true, 00:09:23.402 "nvme_iov_md": false 00:09:23.402 }, 00:09:23.402 "memory_domains": [ 00:09:23.402 { 00:09:23.402 "dma_device_id": "system", 00:09:23.402 "dma_device_type": 1 00:09:23.402 }, 00:09:23.402 { 00:09:23.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.402 "dma_device_type": 2 00:09:23.402 } 00:09:23.402 ], 00:09:23.402 "driver_specific": {} 00:09:23.402 } 00:09:23.402 ] 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.402 "name": "Existed_Raid", 00:09:23.402 "uuid": "383bb883-c125-4d25-ba54-90950cc86f62", 00:09:23.402 "strip_size_kb": 0, 00:09:23.402 "state": "configuring", 00:09:23.402 "raid_level": "raid1", 00:09:23.402 "superblock": true, 00:09:23.402 "num_base_bdevs": 3, 00:09:23.402 "num_base_bdevs_discovered": 1, 00:09:23.402 "num_base_bdevs_operational": 3, 00:09:23.402 "base_bdevs_list": [ 00:09:23.402 { 00:09:23.402 "name": "BaseBdev1", 00:09:23.402 "uuid": "abec52a6-b10b-457d-8059-b22afcb2ca2c", 00:09:23.402 "is_configured": true, 00:09:23.402 "data_offset": 2048, 00:09:23.402 "data_size": 63488 
00:09:23.402 }, 00:09:23.402 { 00:09:23.402 "name": "BaseBdev2", 00:09:23.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.402 "is_configured": false, 00:09:23.402 "data_offset": 0, 00:09:23.402 "data_size": 0 00:09:23.402 }, 00:09:23.402 { 00:09:23.402 "name": "BaseBdev3", 00:09:23.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.402 "is_configured": false, 00:09:23.402 "data_offset": 0, 00:09:23.402 "data_size": 0 00:09:23.402 } 00:09:23.402 ] 00:09:23.402 }' 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.402 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 [2024-12-13 04:25:23.572061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.662 [2024-12-13 04:25:23.572108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.662 [2024-12-13 04:25:23.580084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.662 [2024-12-13 04:25:23.582206] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.662 [2024-12-13 04:25:23.582242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.662 [2024-12-13 04:25:23.582251] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.662 [2024-12-13 04:25:23.582261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.662 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.663 "name": "Existed_Raid", 00:09:23.663 "uuid": "be830dfe-5b4b-459a-aaec-031df7095f03", 00:09:23.663 "strip_size_kb": 0, 00:09:23.663 "state": "configuring", 00:09:23.663 "raid_level": "raid1", 00:09:23.663 "superblock": true, 00:09:23.663 "num_base_bdevs": 3, 00:09:23.663 "num_base_bdevs_discovered": 1, 00:09:23.663 "num_base_bdevs_operational": 3, 00:09:23.663 "base_bdevs_list": [ 00:09:23.663 { 00:09:23.663 "name": "BaseBdev1", 00:09:23.663 "uuid": "abec52a6-b10b-457d-8059-b22afcb2ca2c", 00:09:23.663 "is_configured": true, 00:09:23.663 "data_offset": 2048, 00:09:23.663 "data_size": 63488 00:09:23.663 }, 00:09:23.663 { 00:09:23.663 "name": "BaseBdev2", 00:09:23.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.663 "is_configured": false, 00:09:23.663 "data_offset": 0, 00:09:23.663 "data_size": 0 00:09:23.663 }, 00:09:23.663 { 00:09:23.663 "name": "BaseBdev3", 00:09:23.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.663 "is_configured": false, 00:09:23.663 "data_offset": 0, 00:09:23.663 "data_size": 0 00:09:23.663 } 00:09:23.663 ] 00:09:23.663 }' 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.663 04:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.233 [2024-12-13 04:25:24.056038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.233 BaseBdev2 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.233 [ 00:09:24.233 { 00:09:24.233 "name": "BaseBdev2", 00:09:24.233 "aliases": [ 00:09:24.233 "8fd303ef-9480-4f46-900b-d7cfcf5608f3" 00:09:24.233 ], 00:09:24.233 "product_name": "Malloc disk", 00:09:24.233 "block_size": 512, 00:09:24.233 "num_blocks": 65536, 00:09:24.233 "uuid": "8fd303ef-9480-4f46-900b-d7cfcf5608f3", 00:09:24.233 "assigned_rate_limits": { 00:09:24.233 "rw_ios_per_sec": 0, 00:09:24.233 "rw_mbytes_per_sec": 0, 00:09:24.233 "r_mbytes_per_sec": 0, 00:09:24.233 "w_mbytes_per_sec": 0 00:09:24.233 }, 00:09:24.233 "claimed": true, 00:09:24.233 "claim_type": "exclusive_write", 00:09:24.233 "zoned": false, 00:09:24.233 "supported_io_types": { 00:09:24.233 "read": true, 00:09:24.233 "write": true, 00:09:24.233 "unmap": true, 00:09:24.233 "flush": true, 00:09:24.233 "reset": true, 00:09:24.233 "nvme_admin": false, 00:09:24.233 "nvme_io": false, 00:09:24.233 "nvme_io_md": false, 00:09:24.233 "write_zeroes": true, 00:09:24.233 "zcopy": true, 00:09:24.233 "get_zone_info": false, 00:09:24.233 "zone_management": false, 00:09:24.233 "zone_append": false, 00:09:24.233 "compare": false, 00:09:24.233 "compare_and_write": false, 00:09:24.233 "abort": true, 00:09:24.233 "seek_hole": false, 00:09:24.233 "seek_data": false, 00:09:24.233 "copy": true, 00:09:24.233 "nvme_iov_md": false 00:09:24.233 }, 00:09:24.233 "memory_domains": [ 00:09:24.233 { 00:09:24.233 "dma_device_id": "system", 00:09:24.233 "dma_device_type": 1 00:09:24.233 }, 00:09:24.233 { 00:09:24.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.233 "dma_device_type": 2 00:09:24.233 } 00:09:24.233 ], 00:09:24.233 "driver_specific": {} 00:09:24.233 } 00:09:24.233 ] 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.233 
04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.233 "name": "Existed_Raid", 00:09:24.233 "uuid": "be830dfe-5b4b-459a-aaec-031df7095f03", 00:09:24.233 "strip_size_kb": 0, 00:09:24.233 "state": "configuring", 00:09:24.233 "raid_level": "raid1", 00:09:24.233 "superblock": true, 00:09:24.233 "num_base_bdevs": 3, 00:09:24.233 "num_base_bdevs_discovered": 2, 00:09:24.233 "num_base_bdevs_operational": 3, 00:09:24.233 "base_bdevs_list": [ 00:09:24.233 { 00:09:24.233 "name": "BaseBdev1", 00:09:24.233 "uuid": "abec52a6-b10b-457d-8059-b22afcb2ca2c", 00:09:24.233 "is_configured": true, 00:09:24.233 "data_offset": 2048, 00:09:24.233 "data_size": 63488 00:09:24.233 }, 00:09:24.233 { 00:09:24.233 "name": "BaseBdev2", 00:09:24.233 "uuid": "8fd303ef-9480-4f46-900b-d7cfcf5608f3", 00:09:24.233 "is_configured": true, 00:09:24.233 "data_offset": 2048, 00:09:24.233 "data_size": 63488 00:09:24.233 }, 00:09:24.233 { 00:09:24.233 "name": "BaseBdev3", 00:09:24.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.233 "is_configured": false, 00:09:24.233 "data_offset": 0, 00:09:24.233 "data_size": 0 00:09:24.233 } 00:09:24.233 ] 00:09:24.233 }' 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.233 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.493 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.493 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.493 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.755 [2024-12-13 04:25:24.530962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.755 [2024-12-13 04:25:24.531730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:09:24.755 [2024-12-13 04:25:24.531823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:24.755 BaseBdev3 00:09:24.755 [2024-12-13 04:25:24.532813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:24.755 [2024-12-13 04:25:24.533277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:24.755 [2024-12-13 04:25:24.533314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:24.755 [2024-12-13 04:25:24.533763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.755 04:25:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.755 [ 00:09:24.755 { 00:09:24.755 "name": "BaseBdev3", 00:09:24.755 "aliases": [ 00:09:24.755 "4af875de-246f-4162-85bd-e6301d3e3b15" 00:09:24.755 ], 00:09:24.755 "product_name": "Malloc disk", 00:09:24.755 "block_size": 512, 00:09:24.755 "num_blocks": 65536, 00:09:24.755 "uuid": "4af875de-246f-4162-85bd-e6301d3e3b15", 00:09:24.755 "assigned_rate_limits": { 00:09:24.755 "rw_ios_per_sec": 0, 00:09:24.755 "rw_mbytes_per_sec": 0, 00:09:24.755 "r_mbytes_per_sec": 0, 00:09:24.755 "w_mbytes_per_sec": 0 00:09:24.755 }, 00:09:24.755 "claimed": true, 00:09:24.755 "claim_type": "exclusive_write", 00:09:24.755 "zoned": false, 00:09:24.755 "supported_io_types": { 00:09:24.755 "read": true, 00:09:24.755 "write": true, 00:09:24.755 "unmap": true, 00:09:24.755 "flush": true, 00:09:24.755 "reset": true, 00:09:24.755 "nvme_admin": false, 00:09:24.755 "nvme_io": false, 00:09:24.755 "nvme_io_md": false, 00:09:24.755 "write_zeroes": true, 00:09:24.755 "zcopy": true, 00:09:24.755 "get_zone_info": false, 00:09:24.755 "zone_management": false, 00:09:24.755 "zone_append": false, 00:09:24.755 "compare": false, 00:09:24.755 "compare_and_write": false, 00:09:24.755 "abort": true, 00:09:24.755 "seek_hole": false, 00:09:24.755 "seek_data": false, 00:09:24.755 "copy": true, 00:09:24.755 "nvme_iov_md": false 00:09:24.755 }, 00:09:24.755 "memory_domains": [ 00:09:24.755 { 00:09:24.755 "dma_device_id": "system", 00:09:24.755 "dma_device_type": 1 00:09:24.755 }, 00:09:24.755 { 00:09:24.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.755 "dma_device_type": 2 00:09:24.755 } 00:09:24.755 ], 00:09:24.755 "driver_specific": {} 00:09:24.755 } 00:09:24.755 ] 
00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.755 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.756 
04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.756 "name": "Existed_Raid", 00:09:24.756 "uuid": "be830dfe-5b4b-459a-aaec-031df7095f03", 00:09:24.756 "strip_size_kb": 0, 00:09:24.756 "state": "online", 00:09:24.756 "raid_level": "raid1", 00:09:24.756 "superblock": true, 00:09:24.756 "num_base_bdevs": 3, 00:09:24.756 "num_base_bdevs_discovered": 3, 00:09:24.756 "num_base_bdevs_operational": 3, 00:09:24.756 "base_bdevs_list": [ 00:09:24.756 { 00:09:24.756 "name": "BaseBdev1", 00:09:24.756 "uuid": "abec52a6-b10b-457d-8059-b22afcb2ca2c", 00:09:24.756 "is_configured": true, 00:09:24.756 "data_offset": 2048, 00:09:24.756 "data_size": 63488 00:09:24.756 }, 00:09:24.756 { 00:09:24.756 "name": "BaseBdev2", 00:09:24.756 "uuid": "8fd303ef-9480-4f46-900b-d7cfcf5608f3", 00:09:24.756 "is_configured": true, 00:09:24.756 "data_offset": 2048, 00:09:24.756 "data_size": 63488 00:09:24.756 }, 00:09:24.756 { 00:09:24.756 "name": "BaseBdev3", 00:09:24.756 "uuid": "4af875de-246f-4162-85bd-e6301d3e3b15", 00:09:24.756 "is_configured": true, 00:09:24.756 "data_offset": 2048, 00:09:24.756 "data_size": 63488 00:09:24.756 } 00:09:24.756 ] 00:09:24.756 }' 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.756 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.019 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.020 04:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.020 [2024-12-13 04:25:25.002430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.020 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.282 "name": "Existed_Raid", 00:09:25.282 "aliases": [ 00:09:25.282 "be830dfe-5b4b-459a-aaec-031df7095f03" 00:09:25.282 ], 00:09:25.282 "product_name": "Raid Volume", 00:09:25.282 "block_size": 512, 00:09:25.282 "num_blocks": 63488, 00:09:25.282 "uuid": "be830dfe-5b4b-459a-aaec-031df7095f03", 00:09:25.282 "assigned_rate_limits": { 00:09:25.282 "rw_ios_per_sec": 0, 00:09:25.282 "rw_mbytes_per_sec": 0, 00:09:25.282 "r_mbytes_per_sec": 0, 00:09:25.282 "w_mbytes_per_sec": 0 00:09:25.282 }, 00:09:25.282 "claimed": false, 00:09:25.282 "zoned": false, 00:09:25.282 "supported_io_types": { 00:09:25.282 "read": true, 00:09:25.282 "write": true, 00:09:25.282 "unmap": false, 00:09:25.282 "flush": false, 00:09:25.282 "reset": true, 00:09:25.282 "nvme_admin": false, 00:09:25.282 "nvme_io": false, 00:09:25.282 "nvme_io_md": false, 00:09:25.282 "write_zeroes": true, 
00:09:25.282 "zcopy": false, 00:09:25.282 "get_zone_info": false, 00:09:25.282 "zone_management": false, 00:09:25.282 "zone_append": false, 00:09:25.282 "compare": false, 00:09:25.282 "compare_and_write": false, 00:09:25.282 "abort": false, 00:09:25.282 "seek_hole": false, 00:09:25.282 "seek_data": false, 00:09:25.282 "copy": false, 00:09:25.282 "nvme_iov_md": false 00:09:25.282 }, 00:09:25.282 "memory_domains": [ 00:09:25.282 { 00:09:25.282 "dma_device_id": "system", 00:09:25.282 "dma_device_type": 1 00:09:25.282 }, 00:09:25.282 { 00:09:25.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.282 "dma_device_type": 2 00:09:25.282 }, 00:09:25.282 { 00:09:25.282 "dma_device_id": "system", 00:09:25.282 "dma_device_type": 1 00:09:25.282 }, 00:09:25.282 { 00:09:25.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.282 "dma_device_type": 2 00:09:25.282 }, 00:09:25.282 { 00:09:25.282 "dma_device_id": "system", 00:09:25.282 "dma_device_type": 1 00:09:25.282 }, 00:09:25.282 { 00:09:25.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.282 "dma_device_type": 2 00:09:25.282 } 00:09:25.282 ], 00:09:25.282 "driver_specific": { 00:09:25.282 "raid": { 00:09:25.282 "uuid": "be830dfe-5b4b-459a-aaec-031df7095f03", 00:09:25.282 "strip_size_kb": 0, 00:09:25.282 "state": "online", 00:09:25.282 "raid_level": "raid1", 00:09:25.282 "superblock": true, 00:09:25.282 "num_base_bdevs": 3, 00:09:25.282 "num_base_bdevs_discovered": 3, 00:09:25.282 "num_base_bdevs_operational": 3, 00:09:25.282 "base_bdevs_list": [ 00:09:25.282 { 00:09:25.282 "name": "BaseBdev1", 00:09:25.282 "uuid": "abec52a6-b10b-457d-8059-b22afcb2ca2c", 00:09:25.282 "is_configured": true, 00:09:25.282 "data_offset": 2048, 00:09:25.282 "data_size": 63488 00:09:25.282 }, 00:09:25.282 { 00:09:25.282 "name": "BaseBdev2", 00:09:25.282 "uuid": "8fd303ef-9480-4f46-900b-d7cfcf5608f3", 00:09:25.282 "is_configured": true, 00:09:25.282 "data_offset": 2048, 00:09:25.282 "data_size": 63488 00:09:25.282 }, 00:09:25.282 { 
00:09:25.282 "name": "BaseBdev3", 00:09:25.282 "uuid": "4af875de-246f-4162-85bd-e6301d3e3b15", 00:09:25.282 "is_configured": true, 00:09:25.282 "data_offset": 2048, 00:09:25.282 "data_size": 63488 00:09:25.282 } 00:09:25.282 ] 00:09:25.282 } 00:09:25.282 } 00:09:25.282 }' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.282 BaseBdev2 00:09:25.282 BaseBdev3' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.282 04:25:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.282 [2024-12-13 04:25:25.265691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.282 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.282 
04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.541 "name": "Existed_Raid", 00:09:25.541 "uuid": "be830dfe-5b4b-459a-aaec-031df7095f03", 00:09:25.541 "strip_size_kb": 0, 00:09:25.541 "state": "online", 00:09:25.541 "raid_level": "raid1", 00:09:25.541 "superblock": true, 00:09:25.541 "num_base_bdevs": 3, 00:09:25.541 "num_base_bdevs_discovered": 2, 00:09:25.541 "num_base_bdevs_operational": 2, 00:09:25.541 "base_bdevs_list": [ 00:09:25.541 { 00:09:25.541 "name": null, 00:09:25.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.541 "is_configured": false, 00:09:25.541 "data_offset": 0, 00:09:25.541 "data_size": 63488 00:09:25.541 }, 00:09:25.541 { 00:09:25.541 "name": "BaseBdev2", 00:09:25.541 "uuid": "8fd303ef-9480-4f46-900b-d7cfcf5608f3", 00:09:25.541 "is_configured": true, 00:09:25.541 "data_offset": 2048, 00:09:25.541 "data_size": 63488 00:09:25.541 }, 00:09:25.541 { 00:09:25.541 "name": "BaseBdev3", 00:09:25.541 "uuid": "4af875de-246f-4162-85bd-e6301d3e3b15", 00:09:25.541 "is_configured": true, 00:09:25.541 "data_offset": 2048, 00:09:25.541 "data_size": 63488 00:09:25.541 } 00:09:25.541 ] 00:09:25.541 }' 00:09:25.541 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.541 
04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.801 [2024-12-13 04:25:25.773983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.801 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.062 [2024-12-13 04:25:25.838511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.062 [2024-12-13 04:25:25.838685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.062 [2024-12-13 04:25:25.860152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.062 [2024-12-13 04:25:25.860208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.062 [2024-12-13 04:25:25.860225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.062 BaseBdev2 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.062 04:25:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.062 [
00:09:26.062 {
00:09:26.062 "name": "BaseBdev2",
00:09:26.062 "aliases": [
00:09:26.062 "75594c24-d376-451f-ab8e-b7cfa13d7eee"
00:09:26.062 ],
00:09:26.062 "product_name": "Malloc disk",
00:09:26.062 "block_size": 512,
00:09:26.062 "num_blocks": 65536,
00:09:26.062 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:26.062 "assigned_rate_limits": {
00:09:26.062 "rw_ios_per_sec": 0,
00:09:26.062 "rw_mbytes_per_sec": 0,
00:09:26.062 "r_mbytes_per_sec": 0,
00:09:26.062 "w_mbytes_per_sec": 0
00:09:26.062 },
00:09:26.062 "claimed": false,
00:09:26.062 "zoned": false,
00:09:26.062 "supported_io_types": {
00:09:26.062 "read": true,
00:09:26.062 "write": true,
00:09:26.062 "unmap": true,
00:09:26.062 "flush": true,
00:09:26.062 "reset": true,
00:09:26.062 "nvme_admin": false,
00:09:26.062 "nvme_io": false,
00:09:26.062 "nvme_io_md": false,
00:09:26.062 "write_zeroes": true,
00:09:26.062 "zcopy": true,
00:09:26.062 "get_zone_info": false,
00:09:26.062 "zone_management": false,
00:09:26.062 "zone_append": false,
00:09:26.062 "compare": false,
00:09:26.062 "compare_and_write": false,
00:09:26.062 "abort": true,
00:09:26.062 "seek_hole": false,
00:09:26.062 "seek_data": false,
00:09:26.062 "copy": true,
00:09:26.062 "nvme_iov_md": false
00:09:26.062 },
00:09:26.062 "memory_domains": [
00:09:26.062 {
00:09:26.062 "dma_device_id": "system",
00:09:26.062 "dma_device_type": 1
00:09:26.062 },
00:09:26.062 {
00:09:26.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:26.062 "dma_device_type": 2
00:09:26.062 }
00:09:26.062 ],
00:09:26.062 "driver_specific": {}
00:09:26.062 }
00:09:26.062 ]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.062 BaseBdev3
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:26.062 04:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.062 [
00:09:26.062 {
00:09:26.062 "name": "BaseBdev3",
00:09:26.062 "aliases": [
00:09:26.062 "6924f6c6-283c-4085-861f-c52d6f0e20f3"
00:09:26.062 ],
00:09:26.062 "product_name": "Malloc disk",
00:09:26.062 "block_size": 512,
00:09:26.062 "num_blocks": 65536,
00:09:26.062 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:26.062 "assigned_rate_limits": {
00:09:26.062 "rw_ios_per_sec": 0,
00:09:26.062 "rw_mbytes_per_sec": 0,
00:09:26.062 "r_mbytes_per_sec": 0,
00:09:26.062 "w_mbytes_per_sec": 0
00:09:26.062 },
00:09:26.062 "claimed": false,
00:09:26.062 "zoned": false,
00:09:26.062 "supported_io_types": {
00:09:26.062 "read": true,
00:09:26.062 "write": true,
00:09:26.062 "unmap": true,
00:09:26.062 "flush": true,
00:09:26.062 "reset": true,
00:09:26.062 "nvme_admin": false,
00:09:26.062 "nvme_io": false,
00:09:26.062 "nvme_io_md": false,
00:09:26.062 "write_zeroes": true,
00:09:26.062 "zcopy": true,
00:09:26.062 "get_zone_info": false,
00:09:26.062 "zone_management": false,
00:09:26.062 "zone_append": false,
00:09:26.062 "compare": false,
00:09:26.062 "compare_and_write": false,
00:09:26.062 "abort": true,
00:09:26.062 "seek_hole": false,
00:09:26.062 "seek_data": false,
00:09:26.062 "copy": true,
00:09:26.062 "nvme_iov_md": false
00:09:26.062 },
00:09:26.062 "memory_domains": [
00:09:26.062 {
00:09:26.062 "dma_device_id": "system",
00:09:26.062 "dma_device_type": 1
00:09:26.062 },
00:09:26.062 {
00:09:26.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:26.062 "dma_device_type": 2
00:09:26.062 }
00:09:26.062 ],
00:09:26.062 "driver_specific": {}
00:09:26.062 }
00:09:26.062 ]
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.062 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.063 [2024-12-13 04:25:26.031608] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:26.063 [2024-12-13 04:25:26.031702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:26.063 [2024-12-13 04:25:26.031748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:26.063 [2024-12-13 04:25:26.033842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.063 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.322 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.322 "name": "Existed_Raid",
00:09:26.322 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:26.322 "strip_size_kb": 0,
00:09:26.322 "state": "configuring",
00:09:26.322 "raid_level": "raid1",
00:09:26.322 "superblock": true,
00:09:26.322 "num_base_bdevs": 3,
00:09:26.322 "num_base_bdevs_discovered": 2,
00:09:26.322 "num_base_bdevs_operational": 3,
00:09:26.322 "base_bdevs_list": [
00:09:26.322 {
00:09:26.322 "name": "BaseBdev1",
00:09:26.322 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.322 "is_configured": false,
00:09:26.322 "data_offset": 0,
00:09:26.322 "data_size": 0
00:09:26.322 },
00:09:26.322 {
00:09:26.322 "name": "BaseBdev2",
00:09:26.322 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:26.322 "is_configured": true,
00:09:26.322 "data_offset": 2048,
00:09:26.322 "data_size": 63488
00:09:26.322 },
00:09:26.322 {
00:09:26.322 "name": "BaseBdev3",
00:09:26.322 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:26.322 "is_configured": true,
00:09:26.323 "data_offset": 2048,
00:09:26.323 "data_size": 63488
00:09:26.323 }
00:09:26.323 ]
00:09:26.323 }'
00:09:26.323 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.323 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.582 [2024-12-13 04:25:26.446966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.582 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.583 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.583 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.583 "name": "Existed_Raid",
00:09:26.583 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:26.583 "strip_size_kb": 0,
00:09:26.583 "state": "configuring",
00:09:26.583 "raid_level": "raid1",
00:09:26.583 "superblock": true,
00:09:26.583 "num_base_bdevs": 3,
00:09:26.583 "num_base_bdevs_discovered": 1,
00:09:26.583 "num_base_bdevs_operational": 3,
00:09:26.583 "base_bdevs_list": [
00:09:26.583 {
00:09:26.583 "name": "BaseBdev1",
00:09:26.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.583 "is_configured": false,
00:09:26.583 "data_offset": 0,
00:09:26.583 "data_size": 0
00:09:26.583 },
00:09:26.583 {
00:09:26.583 "name": null,
00:09:26.583 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:26.583 "is_configured": false,
00:09:26.583 "data_offset": 0,
00:09:26.583 "data_size": 63488
00:09:26.583 },
00:09:26.583 {
00:09:26.583 "name": "BaseBdev3",
00:09:26.583 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:26.583 "is_configured": true,
00:09:26.583 "data_offset": 2048,
00:09:26.583 "data_size": 63488
00:09:26.583 }
00:09:26.583 ]
00:09:26.583 }'
00:09:26.583 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.583 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.152 [2024-12-13 04:25:26.942844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:27.152 BaseBdev1
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:27.152 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.153 [
00:09:27.153 {
00:09:27.153 "name": "BaseBdev1",
00:09:27.153 "aliases": [
00:09:27.153 "0800ccc4-a1a4-475c-b212-c385c99931bf"
00:09:27.153 ],
00:09:27.153 "product_name": "Malloc disk",
00:09:27.153 "block_size": 512,
00:09:27.153 "num_blocks": 65536,
00:09:27.153 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf",
00:09:27.153 "assigned_rate_limits": {
00:09:27.153 "rw_ios_per_sec": 0,
00:09:27.153 "rw_mbytes_per_sec": 0,
00:09:27.153 "r_mbytes_per_sec": 0,
00:09:27.153 "w_mbytes_per_sec": 0
00:09:27.153 },
00:09:27.153 "claimed": true,
00:09:27.153 "claim_type": "exclusive_write",
00:09:27.153 "zoned": false,
00:09:27.153 "supported_io_types": {
00:09:27.153 "read": true,
00:09:27.153 "write": true,
00:09:27.153 "unmap": true,
00:09:27.153 "flush": true,
00:09:27.153 "reset": true,
00:09:27.153 "nvme_admin": false,
00:09:27.153 "nvme_io": false,
00:09:27.153 "nvme_io_md": false,
00:09:27.153 "write_zeroes": true,
00:09:27.153 "zcopy": true,
00:09:27.153 "get_zone_info": false,
00:09:27.153 "zone_management": false,
00:09:27.153 "zone_append": false,
00:09:27.153 "compare": false,
00:09:27.153 "compare_and_write": false,
00:09:27.153 "abort": true,
00:09:27.153 "seek_hole": false,
00:09:27.153 "seek_data": false,
00:09:27.153 "copy": true,
00:09:27.153 "nvme_iov_md": false
00:09:27.153 },
00:09:27.153 "memory_domains": [
00:09:27.153 {
00:09:27.153 "dma_device_id": "system",
00:09:27.153 "dma_device_type": 1
00:09:27.153 },
00:09:27.153 {
00:09:27.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.153 "dma_device_type": 2
00:09:27.153 }
00:09:27.153 ],
00:09:27.153 "driver_specific": {}
00:09:27.153 }
00:09:27.153 ]
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.153 04:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.153 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.153 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.153 "name": "Existed_Raid",
00:09:27.153 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:27.153 "strip_size_kb": 0,
00:09:27.153 "state": "configuring",
00:09:27.153 "raid_level": "raid1",
00:09:27.153 "superblock": true,
00:09:27.153 "num_base_bdevs": 3,
00:09:27.153 "num_base_bdevs_discovered": 2,
00:09:27.153 "num_base_bdevs_operational": 3,
00:09:27.153 "base_bdevs_list": [
00:09:27.153 {
00:09:27.153 "name": "BaseBdev1",
00:09:27.153 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf",
00:09:27.153 "is_configured": true,
00:09:27.153 "data_offset": 2048,
00:09:27.153 "data_size": 63488
00:09:27.153 },
00:09:27.153 {
00:09:27.153 "name": null,
00:09:27.153 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:27.153 "is_configured": false,
00:09:27.153 "data_offset": 0,
00:09:27.153 "data_size": 63488
00:09:27.153 },
00:09:27.153 {
00:09:27.153 "name": "BaseBdev3",
00:09:27.153 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:27.153 "is_configured": true,
00:09:27.153 "data_offset": 2048,
00:09:27.153 "data_size": 63488
00:09:27.153 }
00:09:27.153 ]
00:09:27.153 }'
00:09:27.153 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.153 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.412 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:27.412 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.412 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.412 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.412 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.412 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.672 [2024-12-13 04:25:27.434031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.672 "name": "Existed_Raid",
00:09:27.672 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:27.672 "strip_size_kb": 0,
00:09:27.672 "state": "configuring",
00:09:27.672 "raid_level": "raid1",
00:09:27.672 "superblock": true,
00:09:27.672 "num_base_bdevs": 3,
00:09:27.672 "num_base_bdevs_discovered": 1,
00:09:27.672 "num_base_bdevs_operational": 3,
00:09:27.672 "base_bdevs_list": [
00:09:27.672 {
00:09:27.672 "name": "BaseBdev1",
00:09:27.672 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf",
00:09:27.672 "is_configured": true,
00:09:27.672 "data_offset": 2048,
00:09:27.672 "data_size": 63488
00:09:27.672 },
00:09:27.672 {
00:09:27.672 "name": null,
00:09:27.672 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:27.672 "is_configured": false,
00:09:27.672 "data_offset": 0,
00:09:27.672 "data_size": 63488
00:09:27.672 },
00:09:27.672 {
00:09:27.672 "name": null,
00:09:27.672 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:27.672 "is_configured": false,
00:09:27.672 "data_offset": 0,
00:09:27.672 "data_size": 63488
00:09:27.672 }
00:09:27.672 ]
00:09:27.672 }'
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.672 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.933 [2024-12-13 04:25:27.917222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.933 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.193 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.193 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.193 "name": "Existed_Raid",
00:09:28.193 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:28.193 "strip_size_kb": 0,
00:09:28.193 "state": "configuring",
00:09:28.193 "raid_level": "raid1",
00:09:28.193 "superblock": true,
00:09:28.193 "num_base_bdevs": 3,
00:09:28.193 "num_base_bdevs_discovered": 2,
00:09:28.193 "num_base_bdevs_operational": 3,
00:09:28.193 "base_bdevs_list": [
00:09:28.193 {
00:09:28.193 "name": "BaseBdev1",
00:09:28.193 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf",
00:09:28.193 "is_configured": true,
00:09:28.193 "data_offset": 2048,
00:09:28.193 "data_size": 63488
00:09:28.193 },
00:09:28.193 {
00:09:28.193 "name": null,
00:09:28.193 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:28.193 "is_configured": false,
00:09:28.193 "data_offset": 0,
00:09:28.193 "data_size": 63488
00:09:28.193 },
00:09:28.193 {
00:09:28.193 "name": "BaseBdev3",
00:09:28.193 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:28.193 "is_configured": true,
00:09:28.193 "data_offset": 2048,
00:09:28.193 "data_size": 63488
00:09:28.193 }
00:09:28.193 ]
00:09:28.193 }'
00:09:28.193 04:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.193 04:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.453 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.453 [2024-12-13 04:25:28.460401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.713 "name": "Existed_Raid",
00:09:28.713 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:28.713 "strip_size_kb": 0,
00:09:28.713 "state": "configuring",
00:09:28.713 "raid_level": "raid1",
00:09:28.713 "superblock": true,
00:09:28.713 "num_base_bdevs": 3,
00:09:28.713 "num_base_bdevs_discovered": 1,
00:09:28.713 "num_base_bdevs_operational": 3,
00:09:28.713 "base_bdevs_list": [
00:09:28.713 {
00:09:28.713 "name": null,
00:09:28.713 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf",
00:09:28.713 "is_configured": false,
00:09:28.713 "data_offset": 0,
00:09:28.713 "data_size": 63488
00:09:28.713 },
00:09:28.713 {
00:09:28.713 "name": null,
00:09:28.713 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:28.713 "is_configured": false,
00:09:28.713 "data_offset": 0,
00:09:28.713 "data_size": 63488
00:09:28.713 },
00:09:28.713 {
00:09:28.713 "name": "BaseBdev3",
00:09:28.713 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:28.713 "is_configured": true,
00:09:28.713 "data_offset": 2048,
00:09:28.713 "data_size": 63488
00:09:28.713 }
00:09:28.713 ]
00:09:28.713 }'
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.713 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.973 [2024-12-13 04:25:28.943584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.973 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.233 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.233 "name": "Existed_Raid",
00:09:29.233 "uuid": "1db57410-592b-4f5e-b296-9edee978653c",
00:09:29.233 "strip_size_kb": 0,
00:09:29.233 "state": "configuring",
00:09:29.233 "raid_level": "raid1",
00:09:29.233 "superblock": true,
00:09:29.233 "num_base_bdevs": 3,
00:09:29.233 "num_base_bdevs_discovered": 2,
00:09:29.233 "num_base_bdevs_operational": 3,
00:09:29.233 "base_bdevs_list": [
00:09:29.233 {
00:09:29.233 "name": null,
00:09:29.233 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf",
00:09:29.233 "is_configured": false,
00:09:29.233 "data_offset": 0,
00:09:29.233 "data_size": 63488
00:09:29.233 },
00:09:29.233 {
00:09:29.233 "name": "BaseBdev2",
00:09:29.233 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee",
00:09:29.233 "is_configured": true,
00:09:29.233 "data_offset": 2048,
00:09:29.233 "data_size": 63488
00:09:29.233 },
00:09:29.233 {
00:09:29.233 "name": "BaseBdev3",
00:09:29.233 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3",
00:09:29.233 "is_configured": true,
00:09:29.233 "data_offset": 2048,
00:09:29.233 "data_size": 63488
00:09:29.233 }
00:09:29.233 ]
00:09:29.233 }'
00:09:29.233 04:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.233 04:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.493 04:25:
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0800ccc4-a1a4-475c-b212-c385c99931bf 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.493 [2024-12-13 04:25:29.503179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.493 [2024-12-13 04:25:29.503376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:29.493 [2024-12-13 04:25:29.503388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:29.493 NewBaseBdev 00:09:29.493 [2024-12-13 04:25:29.503696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:29.493 [2024-12-13 04:25:29.503829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:29.493 [2024-12-13 04:25:29.503845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:29.493 [2024-12-13 04:25:29.503952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.493 
04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.493 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.753 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.754 [ 00:09:29.754 { 00:09:29.754 "name": "NewBaseBdev", 00:09:29.754 "aliases": [ 00:09:29.754 "0800ccc4-a1a4-475c-b212-c385c99931bf" 00:09:29.754 ], 00:09:29.754 "product_name": "Malloc disk", 00:09:29.754 "block_size": 512, 00:09:29.754 "num_blocks": 65536, 00:09:29.754 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf", 00:09:29.754 "assigned_rate_limits": { 00:09:29.754 "rw_ios_per_sec": 0, 00:09:29.754 "rw_mbytes_per_sec": 0, 00:09:29.754 "r_mbytes_per_sec": 0, 00:09:29.754 "w_mbytes_per_sec": 0 00:09:29.754 }, 00:09:29.754 "claimed": true, 00:09:29.754 "claim_type": "exclusive_write", 00:09:29.754 
"zoned": false, 00:09:29.754 "supported_io_types": { 00:09:29.754 "read": true, 00:09:29.754 "write": true, 00:09:29.754 "unmap": true, 00:09:29.754 "flush": true, 00:09:29.754 "reset": true, 00:09:29.754 "nvme_admin": false, 00:09:29.754 "nvme_io": false, 00:09:29.754 "nvme_io_md": false, 00:09:29.754 "write_zeroes": true, 00:09:29.754 "zcopy": true, 00:09:29.754 "get_zone_info": false, 00:09:29.754 "zone_management": false, 00:09:29.754 "zone_append": false, 00:09:29.754 "compare": false, 00:09:29.754 "compare_and_write": false, 00:09:29.754 "abort": true, 00:09:29.754 "seek_hole": false, 00:09:29.754 "seek_data": false, 00:09:29.754 "copy": true, 00:09:29.754 "nvme_iov_md": false 00:09:29.754 }, 00:09:29.754 "memory_domains": [ 00:09:29.754 { 00:09:29.754 "dma_device_id": "system", 00:09:29.754 "dma_device_type": 1 00:09:29.754 }, 00:09:29.754 { 00:09:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.754 "dma_device_type": 2 00:09:29.754 } 00:09:29.754 ], 00:09:29.754 "driver_specific": {} 00:09:29.754 } 00:09:29.754 ] 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.754 "name": "Existed_Raid", 00:09:29.754 "uuid": "1db57410-592b-4f5e-b296-9edee978653c", 00:09:29.754 "strip_size_kb": 0, 00:09:29.754 "state": "online", 00:09:29.754 "raid_level": "raid1", 00:09:29.754 "superblock": true, 00:09:29.754 "num_base_bdevs": 3, 00:09:29.754 "num_base_bdevs_discovered": 3, 00:09:29.754 "num_base_bdevs_operational": 3, 00:09:29.754 "base_bdevs_list": [ 00:09:29.754 { 00:09:29.754 "name": "NewBaseBdev", 00:09:29.754 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf", 00:09:29.754 "is_configured": true, 00:09:29.754 "data_offset": 2048, 00:09:29.754 "data_size": 63488 00:09:29.754 }, 00:09:29.754 { 00:09:29.754 "name": "BaseBdev2", 00:09:29.754 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee", 00:09:29.754 "is_configured": true, 00:09:29.754 "data_offset": 2048, 00:09:29.754 "data_size": 63488 00:09:29.754 }, 00:09:29.754 
{ 00:09:29.754 "name": "BaseBdev3", 00:09:29.754 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3", 00:09:29.754 "is_configured": true, 00:09:29.754 "data_offset": 2048, 00:09:29.754 "data_size": 63488 00:09:29.754 } 00:09:29.754 ] 00:09:29.754 }' 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.754 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 [2024-12-13 04:25:29.926785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.014 "name": "Existed_Raid", 00:09:30.014 
"aliases": [ 00:09:30.014 "1db57410-592b-4f5e-b296-9edee978653c" 00:09:30.014 ], 00:09:30.014 "product_name": "Raid Volume", 00:09:30.014 "block_size": 512, 00:09:30.014 "num_blocks": 63488, 00:09:30.014 "uuid": "1db57410-592b-4f5e-b296-9edee978653c", 00:09:30.014 "assigned_rate_limits": { 00:09:30.014 "rw_ios_per_sec": 0, 00:09:30.014 "rw_mbytes_per_sec": 0, 00:09:30.014 "r_mbytes_per_sec": 0, 00:09:30.014 "w_mbytes_per_sec": 0 00:09:30.014 }, 00:09:30.014 "claimed": false, 00:09:30.014 "zoned": false, 00:09:30.014 "supported_io_types": { 00:09:30.014 "read": true, 00:09:30.014 "write": true, 00:09:30.014 "unmap": false, 00:09:30.014 "flush": false, 00:09:30.014 "reset": true, 00:09:30.014 "nvme_admin": false, 00:09:30.014 "nvme_io": false, 00:09:30.014 "nvme_io_md": false, 00:09:30.014 "write_zeroes": true, 00:09:30.014 "zcopy": false, 00:09:30.014 "get_zone_info": false, 00:09:30.014 "zone_management": false, 00:09:30.014 "zone_append": false, 00:09:30.014 "compare": false, 00:09:30.014 "compare_and_write": false, 00:09:30.014 "abort": false, 00:09:30.014 "seek_hole": false, 00:09:30.014 "seek_data": false, 00:09:30.014 "copy": false, 00:09:30.014 "nvme_iov_md": false 00:09:30.014 }, 00:09:30.014 "memory_domains": [ 00:09:30.014 { 00:09:30.014 "dma_device_id": "system", 00:09:30.014 "dma_device_type": 1 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.014 "dma_device_type": 2 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "dma_device_id": "system", 00:09:30.014 "dma_device_type": 1 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.014 "dma_device_type": 2 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "dma_device_id": "system", 00:09:30.014 "dma_device_type": 1 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.014 "dma_device_type": 2 00:09:30.014 } 00:09:30.014 ], 00:09:30.014 "driver_specific": { 00:09:30.014 "raid": { 00:09:30.014 
"uuid": "1db57410-592b-4f5e-b296-9edee978653c", 00:09:30.014 "strip_size_kb": 0, 00:09:30.014 "state": "online", 00:09:30.014 "raid_level": "raid1", 00:09:30.014 "superblock": true, 00:09:30.014 "num_base_bdevs": 3, 00:09:30.014 "num_base_bdevs_discovered": 3, 00:09:30.014 "num_base_bdevs_operational": 3, 00:09:30.014 "base_bdevs_list": [ 00:09:30.014 { 00:09:30.014 "name": "NewBaseBdev", 00:09:30.014 "uuid": "0800ccc4-a1a4-475c-b212-c385c99931bf", 00:09:30.014 "is_configured": true, 00:09:30.014 "data_offset": 2048, 00:09:30.014 "data_size": 63488 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "name": "BaseBdev2", 00:09:30.014 "uuid": "75594c24-d376-451f-ab8e-b7cfa13d7eee", 00:09:30.014 "is_configured": true, 00:09:30.014 "data_offset": 2048, 00:09:30.014 "data_size": 63488 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "name": "BaseBdev3", 00:09:30.014 "uuid": "6924f6c6-283c-4085-861f-c52d6f0e20f3", 00:09:30.014 "is_configured": true, 00:09:30.014 "data_offset": 2048, 00:09:30.014 "data_size": 63488 00:09:30.014 } 00:09:30.014 ] 00:09:30.014 } 00:09:30.014 } 00:09:30.014 }' 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.014 04:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:30.014 BaseBdev2 00:09:30.014 BaseBdev3' 00:09:30.015 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:30.274 04:25:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.274 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.275 04:25:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.275 [2024-12-13 04:25:30.186030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.275 [2024-12-13 04:25:30.186098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.275 [2024-12-13 04:25:30.186180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.275 [2024-12-13 04:25:30.186493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.275 [2024-12-13 04:25:30.186505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80733 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 80733 ']' 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80733 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80733 00:09:30.275 killing process with pid 80733 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80733' 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80733 00:09:30.275 [2024-12-13 04:25:30.234359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.275 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80733 00:09:30.534 [2024-12-13 04:25:30.293858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.794 ************************************ 00:09:30.794 END TEST raid_state_function_test_sb 00:09:30.794 ************************************ 00:09:30.794 04:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:30.794 00:09:30.794 real 0m8.869s 00:09:30.794 user 0m14.854s 00:09:30.794 sys 0m1.910s 00:09:30.794 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.794 04:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.794 04:25:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:30.794 04:25:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:30.794 04:25:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.794 04:25:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.794 ************************************ 00:09:30.794 START TEST raid_superblock_test 00:09:30.794 ************************************ 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81341 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81341 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81341 ']' 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.794 04:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.794 [2024-12-13 04:25:30.776877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:30.794 [2024-12-13 04:25:30.777572] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81341 ] 00:09:31.054 [2024-12-13 04:25:30.934511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.054 [2024-12-13 04:25:30.973162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.054 [2024-12-13 04:25:31.049003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.054 [2024-12-13 04:25:31.049042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:31.622 
04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.622 malloc1 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.622 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.622 [2024-12-13 04:25:31.637354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:31.622 [2024-12-13 04:25:31.637484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.622 [2024-12-13 04:25:31.637526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:31.622 [2024-12-13 04:25:31.637566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.881 [2024-12-13 04:25:31.639901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.881 [2024-12-13 04:25:31.639986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:31.881 pt1 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.881 malloc2 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.881 [2024-12-13 04:25:31.671708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.881 [2024-12-13 04:25:31.671801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.881 [2024-12-13 04:25:31.671824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:31.881 [2024-12-13 04:25:31.671835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.881 [2024-12-13 04:25:31.674196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.881 [2024-12-13 04:25:31.674233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.881 
pt2 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.881 malloc3 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.881 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.881 [2024-12-13 04:25:31.709999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.881 [2024-12-13 04:25:31.710109] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.881 [2024-12-13 04:25:31.710148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:31.881 [2024-12-13 04:25:31.710182] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.881 [2024-12-13 04:25:31.712549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.882 [2024-12-13 04:25:31.712582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.882 pt3 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.882 [2024-12-13 04:25:31.722051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:31.882 [2024-12-13 04:25:31.724016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.882 [2024-12-13 04:25:31.724065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.882 [2024-12-13 04:25:31.724212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:31.882 [2024-12-13 04:25:31.724224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.882 [2024-12-13 04:25:31.724509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:31.882 
[2024-12-13 04:25:31.724681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:31.882 [2024-12-13 04:25:31.724694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:31.882 [2024-12-13 04:25:31.724817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.882 "name": "raid_bdev1", 00:09:31.882 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:31.882 "strip_size_kb": 0, 00:09:31.882 "state": "online", 00:09:31.882 "raid_level": "raid1", 00:09:31.882 "superblock": true, 00:09:31.882 "num_base_bdevs": 3, 00:09:31.882 "num_base_bdevs_discovered": 3, 00:09:31.882 "num_base_bdevs_operational": 3, 00:09:31.882 "base_bdevs_list": [ 00:09:31.882 { 00:09:31.882 "name": "pt1", 00:09:31.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.882 "is_configured": true, 00:09:31.882 "data_offset": 2048, 00:09:31.882 "data_size": 63488 00:09:31.882 }, 00:09:31.882 { 00:09:31.882 "name": "pt2", 00:09:31.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.882 "is_configured": true, 00:09:31.882 "data_offset": 2048, 00:09:31.882 "data_size": 63488 00:09:31.882 }, 00:09:31.882 { 00:09:31.882 "name": "pt3", 00:09:31.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.882 "is_configured": true, 00:09:31.882 "data_offset": 2048, 00:09:31.882 "data_size": 63488 00:09:31.882 } 00:09:31.882 ] 00:09:31.882 }' 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.882 04:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.142 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.142 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.142 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.142 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.142 04:25:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.142 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.402 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.402 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.402 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.402 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.402 [2024-12-13 04:25:32.165563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.402 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.402 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.402 "name": "raid_bdev1", 00:09:32.402 "aliases": [ 00:09:32.402 "9503327a-4df9-4e5d-819a-2c397c9df17d" 00:09:32.402 ], 00:09:32.402 "product_name": "Raid Volume", 00:09:32.402 "block_size": 512, 00:09:32.402 "num_blocks": 63488, 00:09:32.402 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:32.402 "assigned_rate_limits": { 00:09:32.402 "rw_ios_per_sec": 0, 00:09:32.402 "rw_mbytes_per_sec": 0, 00:09:32.402 "r_mbytes_per_sec": 0, 00:09:32.402 "w_mbytes_per_sec": 0 00:09:32.402 }, 00:09:32.402 "claimed": false, 00:09:32.402 "zoned": false, 00:09:32.402 "supported_io_types": { 00:09:32.402 "read": true, 00:09:32.402 "write": true, 00:09:32.402 "unmap": false, 00:09:32.402 "flush": false, 00:09:32.402 "reset": true, 00:09:32.402 "nvme_admin": false, 00:09:32.402 "nvme_io": false, 00:09:32.402 "nvme_io_md": false, 00:09:32.402 "write_zeroes": true, 00:09:32.402 "zcopy": false, 00:09:32.402 "get_zone_info": false, 00:09:32.402 "zone_management": false, 00:09:32.402 "zone_append": false, 00:09:32.402 "compare": false, 00:09:32.402 
"compare_and_write": false, 00:09:32.402 "abort": false, 00:09:32.402 "seek_hole": false, 00:09:32.402 "seek_data": false, 00:09:32.402 "copy": false, 00:09:32.402 "nvme_iov_md": false 00:09:32.402 }, 00:09:32.402 "memory_domains": [ 00:09:32.402 { 00:09:32.402 "dma_device_id": "system", 00:09:32.402 "dma_device_type": 1 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.402 "dma_device_type": 2 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "dma_device_id": "system", 00:09:32.402 "dma_device_type": 1 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.402 "dma_device_type": 2 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "dma_device_id": "system", 00:09:32.402 "dma_device_type": 1 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.402 "dma_device_type": 2 00:09:32.402 } 00:09:32.402 ], 00:09:32.402 "driver_specific": { 00:09:32.402 "raid": { 00:09:32.402 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:32.402 "strip_size_kb": 0, 00:09:32.402 "state": "online", 00:09:32.402 "raid_level": "raid1", 00:09:32.402 "superblock": true, 00:09:32.402 "num_base_bdevs": 3, 00:09:32.402 "num_base_bdevs_discovered": 3, 00:09:32.402 "num_base_bdevs_operational": 3, 00:09:32.402 "base_bdevs_list": [ 00:09:32.402 { 00:09:32.402 "name": "pt1", 00:09:32.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.402 "is_configured": true, 00:09:32.402 "data_offset": 2048, 00:09:32.402 "data_size": 63488 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "name": "pt2", 00:09:32.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.402 "is_configured": true, 00:09:32.402 "data_offset": 2048, 00:09:32.402 "data_size": 63488 00:09:32.402 }, 00:09:32.402 { 00:09:32.402 "name": "pt3", 00:09:32.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.402 "is_configured": true, 00:09:32.402 "data_offset": 2048, 00:09:32.402 "data_size": 63488 00:09:32.402 } 
00:09:32.402 ] 00:09:32.402 } 00:09:32.402 } 00:09:32.402 }' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.403 pt2 00:09:32.403 pt3' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.403 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:32.722 [2024-12-13 04:25:32.417047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9503327a-4df9-4e5d-819a-2c397c9df17d 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9503327a-4df9-4e5d-819a-2c397c9df17d ']' 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.722 [2024-12-13 04:25:32.464728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.722 [2024-12-13 04:25:32.464750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.722 [2024-12-13 04:25:32.464841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.722 [2024-12-13 04:25:32.464920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.722 [2024-12-13 04:25:32.464939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:32.722 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:32.722 04:25:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.723 [2024-12-13 04:25:32.608558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:32.723 [2024-12-13 04:25:32.610703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:32.723 [2024-12-13 04:25:32.610746] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:32.723 [2024-12-13 04:25:32.610793] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:32.723 [2024-12-13 04:25:32.610845] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:32.723 [2024-12-13 04:25:32.610865] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:32.723 [2024-12-13 04:25:32.610877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.723 [2024-12-13 04:25:32.610887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:32.723 request: 00:09:32.723 { 00:09:32.723 "name": "raid_bdev1", 00:09:32.723 "raid_level": "raid1", 00:09:32.723 "base_bdevs": [ 00:09:32.723 "malloc1", 00:09:32.723 "malloc2", 00:09:32.723 "malloc3" 00:09:32.723 ], 00:09:32.723 "superblock": false, 00:09:32.723 "method": "bdev_raid_create", 00:09:32.723 "req_id": 1 00:09:32.723 } 00:09:32.723 Got JSON-RPC error response 00:09:32.723 response: 00:09:32.723 { 00:09:32.723 "code": -17, 00:09:32.723 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:32.723 } 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.723 [2024-12-13 04:25:32.656440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:32.723 [2024-12-13 04:25:32.656538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.723 [2024-12-13 04:25:32.656569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:32.723 [2024-12-13 04:25:32.656599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.723 [2024-12-13 04:25:32.658960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.723 [2024-12-13 04:25:32.659032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:32.723 [2024-12-13 04:25:32.659112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:32.723 [2024-12-13 04:25:32.659190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:32.723 pt1 00:09:32.723 
04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.723 "name": "raid_bdev1", 00:09:32.723 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:32.723 "strip_size_kb": 0, 00:09:32.723 
"state": "configuring", 00:09:32.723 "raid_level": "raid1", 00:09:32.723 "superblock": true, 00:09:32.723 "num_base_bdevs": 3, 00:09:32.723 "num_base_bdevs_discovered": 1, 00:09:32.723 "num_base_bdevs_operational": 3, 00:09:32.723 "base_bdevs_list": [ 00:09:32.723 { 00:09:32.723 "name": "pt1", 00:09:32.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.723 "is_configured": true, 00:09:32.723 "data_offset": 2048, 00:09:32.723 "data_size": 63488 00:09:32.723 }, 00:09:32.723 { 00:09:32.723 "name": null, 00:09:32.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.723 "is_configured": false, 00:09:32.723 "data_offset": 2048, 00:09:32.723 "data_size": 63488 00:09:32.723 }, 00:09:32.723 { 00:09:32.723 "name": null, 00:09:32.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.723 "is_configured": false, 00:09:32.723 "data_offset": 2048, 00:09:32.723 "data_size": 63488 00:09:32.723 } 00:09:32.723 ] 00:09:32.723 }' 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.723 04:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.293 [2024-12-13 04:25:33.075692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.293 [2024-12-13 04:25:33.075787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.293 [2024-12-13 04:25:33.075826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:33.293 
[2024-12-13 04:25:33.075869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.293 [2024-12-13 04:25:33.076252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.293 [2024-12-13 04:25:33.076274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.293 [2024-12-13 04:25:33.076333] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.293 [2024-12-13 04:25:33.076377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.293 pt2 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.293 [2024-12-13 04:25:33.087685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.293 "name": "raid_bdev1", 00:09:33.293 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:33.293 "strip_size_kb": 0, 00:09:33.293 "state": "configuring", 00:09:33.293 "raid_level": "raid1", 00:09:33.293 "superblock": true, 00:09:33.293 "num_base_bdevs": 3, 00:09:33.293 "num_base_bdevs_discovered": 1, 00:09:33.293 "num_base_bdevs_operational": 3, 00:09:33.293 "base_bdevs_list": [ 00:09:33.293 { 00:09:33.293 "name": "pt1", 00:09:33.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.293 "is_configured": true, 00:09:33.293 "data_offset": 2048, 00:09:33.293 "data_size": 63488 00:09:33.293 }, 00:09:33.293 { 00:09:33.293 "name": null, 00:09:33.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.293 "is_configured": false, 00:09:33.293 "data_offset": 0, 00:09:33.293 "data_size": 63488 00:09:33.293 }, 00:09:33.293 { 00:09:33.293 "name": null, 00:09:33.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.293 "is_configured": false, 00:09:33.293 
"data_offset": 2048, 00:09:33.293 "data_size": 63488 00:09:33.293 } 00:09:33.293 ] 00:09:33.293 }' 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.293 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.553 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:33.553 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.553 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.553 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.553 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.554 [2024-12-13 04:25:33.510991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.554 [2024-12-13 04:25:33.511083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.554 [2024-12-13 04:25:33.511120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:33.554 [2024-12-13 04:25:33.511146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.554 [2024-12-13 04:25:33.511557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.554 [2024-12-13 04:25:33.511627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.554 [2024-12-13 04:25:33.511722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.554 [2024-12-13 04:25:33.511770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.554 pt2 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.554 04:25:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.554 [2024-12-13 04:25:33.522966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.554 [2024-12-13 04:25:33.523040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.554 [2024-12-13 04:25:33.523073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:33.554 [2024-12-13 04:25:33.523096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.554 [2024-12-13 04:25:33.523444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.554 [2024-12-13 04:25:33.523512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.554 [2024-12-13 04:25:33.523593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:33.554 [2024-12-13 04:25:33.523636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.554 [2024-12-13 04:25:33.523762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:33.554 [2024-12-13 04:25:33.523797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.554 [2024-12-13 04:25:33.524063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:33.554 [2024-12-13 04:25:33.524216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:09:33.554 [2024-12-13 04:25:33.524256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:33.554 [2024-12-13 04:25:33.524390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.554 pt3 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.554 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.814 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.814 "name": "raid_bdev1", 00:09:33.814 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:33.814 "strip_size_kb": 0, 00:09:33.814 "state": "online", 00:09:33.814 "raid_level": "raid1", 00:09:33.814 "superblock": true, 00:09:33.814 "num_base_bdevs": 3, 00:09:33.814 "num_base_bdevs_discovered": 3, 00:09:33.814 "num_base_bdevs_operational": 3, 00:09:33.814 "base_bdevs_list": [ 00:09:33.814 { 00:09:33.814 "name": "pt1", 00:09:33.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.814 "is_configured": true, 00:09:33.814 "data_offset": 2048, 00:09:33.814 "data_size": 63488 00:09:33.814 }, 00:09:33.814 { 00:09:33.814 "name": "pt2", 00:09:33.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.814 "is_configured": true, 00:09:33.814 "data_offset": 2048, 00:09:33.814 "data_size": 63488 00:09:33.814 }, 00:09:33.814 { 00:09:33.814 "name": "pt3", 00:09:33.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.814 "is_configured": true, 00:09:33.814 "data_offset": 2048, 00:09:33.814 "data_size": 63488 00:09:33.814 } 00:09:33.814 ] 00:09:33.814 }' 00:09:33.814 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.814 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.074 [2024-12-13 04:25:33.974493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.074 04:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.074 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.074 "name": "raid_bdev1", 00:09:34.074 "aliases": [ 00:09:34.074 "9503327a-4df9-4e5d-819a-2c397c9df17d" 00:09:34.074 ], 00:09:34.074 "product_name": "Raid Volume", 00:09:34.074 "block_size": 512, 00:09:34.074 "num_blocks": 63488, 00:09:34.074 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:34.074 "assigned_rate_limits": { 00:09:34.074 "rw_ios_per_sec": 0, 00:09:34.074 "rw_mbytes_per_sec": 0, 00:09:34.074 "r_mbytes_per_sec": 0, 00:09:34.074 "w_mbytes_per_sec": 0 00:09:34.074 }, 00:09:34.074 "claimed": false, 00:09:34.074 "zoned": false, 00:09:34.074 "supported_io_types": { 00:09:34.074 "read": true, 00:09:34.074 "write": true, 00:09:34.074 "unmap": false, 00:09:34.074 "flush": false, 00:09:34.074 "reset": true, 00:09:34.074 "nvme_admin": false, 00:09:34.074 "nvme_io": false, 00:09:34.074 "nvme_io_md": false, 00:09:34.074 "write_zeroes": true, 00:09:34.074 "zcopy": false, 00:09:34.074 "get_zone_info": false, 
00:09:34.074 "zone_management": false, 00:09:34.074 "zone_append": false, 00:09:34.074 "compare": false, 00:09:34.074 "compare_and_write": false, 00:09:34.074 "abort": false, 00:09:34.074 "seek_hole": false, 00:09:34.074 "seek_data": false, 00:09:34.074 "copy": false, 00:09:34.074 "nvme_iov_md": false 00:09:34.074 }, 00:09:34.074 "memory_domains": [ 00:09:34.074 { 00:09:34.074 "dma_device_id": "system", 00:09:34.074 "dma_device_type": 1 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.074 "dma_device_type": 2 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "dma_device_id": "system", 00:09:34.074 "dma_device_type": 1 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.074 "dma_device_type": 2 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "dma_device_id": "system", 00:09:34.074 "dma_device_type": 1 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.074 "dma_device_type": 2 00:09:34.074 } 00:09:34.074 ], 00:09:34.074 "driver_specific": { 00:09:34.074 "raid": { 00:09:34.074 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:34.074 "strip_size_kb": 0, 00:09:34.074 "state": "online", 00:09:34.074 "raid_level": "raid1", 00:09:34.074 "superblock": true, 00:09:34.074 "num_base_bdevs": 3, 00:09:34.074 "num_base_bdevs_discovered": 3, 00:09:34.074 "num_base_bdevs_operational": 3, 00:09:34.074 "base_bdevs_list": [ 00:09:34.074 { 00:09:34.074 "name": "pt1", 00:09:34.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.074 "is_configured": true, 00:09:34.074 "data_offset": 2048, 00:09:34.074 "data_size": 63488 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "name": "pt2", 00:09:34.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.074 "is_configured": true, 00:09:34.074 "data_offset": 2048, 00:09:34.074 "data_size": 63488 00:09:34.074 }, 00:09:34.074 { 00:09:34.074 "name": "pt3", 00:09:34.074 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:34.074 "is_configured": true, 00:09:34.074 "data_offset": 2048, 00:09:34.074 "data_size": 63488 00:09:34.074 } 00:09:34.074 ] 00:09:34.074 } 00:09:34.074 } 00:09:34.074 }' 00:09:34.074 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.074 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.074 pt2 00:09:34.074 pt3' 00:09:34.074 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.334 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:34.335 [2024-12-13 04:25:34.257958] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9503327a-4df9-4e5d-819a-2c397c9df17d '!=' 9503327a-4df9-4e5d-819a-2c397c9df17d ']' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 [2024-12-13 04:25:34.289714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.335 04:25:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.335 "name": "raid_bdev1", 00:09:34.335 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:34.335 "strip_size_kb": 0, 00:09:34.335 "state": "online", 00:09:34.335 "raid_level": "raid1", 00:09:34.335 "superblock": true, 00:09:34.335 "num_base_bdevs": 3, 00:09:34.335 "num_base_bdevs_discovered": 2, 00:09:34.335 "num_base_bdevs_operational": 2, 00:09:34.335 "base_bdevs_list": [ 00:09:34.335 { 00:09:34.335 "name": null, 00:09:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.335 "is_configured": false, 00:09:34.335 "data_offset": 0, 00:09:34.335 "data_size": 63488 00:09:34.335 }, 00:09:34.335 { 00:09:34.335 "name": "pt2", 00:09:34.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.335 "is_configured": true, 00:09:34.335 "data_offset": 2048, 00:09:34.335 "data_size": 63488 00:09:34.335 }, 00:09:34.335 { 00:09:34.335 "name": "pt3", 00:09:34.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.335 "is_configured": true, 00:09:34.335 "data_offset": 2048, 00:09:34.335 "data_size": 63488 00:09:34.335 } 
00:09:34.335 ] 00:09:34.335 }' 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.335 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 [2024-12-13 04:25:34.732958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.905 [2024-12-13 04:25:34.733028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.905 [2024-12-13 04:25:34.733129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.905 [2024-12-13 04:25:34.733205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.905 [2024-12-13 04:25:34.733255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.905 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.905 04:25:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 [2024-12-13 04:25:34.812815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.905 [2024-12-13 04:25:34.812863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.905 [2024-12-13 04:25:34.812888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:34.905 [2024-12-13 04:25:34.812899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.906 [2024-12-13 04:25:34.815413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.906 [2024-12-13 04:25:34.815507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.906 [2024-12-13 04:25:34.815588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:34.906 [2024-12-13 04:25:34.815623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.906 pt2 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.906 04:25:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.906 "name": "raid_bdev1", 00:09:34.906 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:34.906 "strip_size_kb": 0, 00:09:34.906 "state": "configuring", 00:09:34.906 "raid_level": "raid1", 00:09:34.906 "superblock": true, 00:09:34.906 "num_base_bdevs": 3, 00:09:34.906 "num_base_bdevs_discovered": 1, 00:09:34.906 "num_base_bdevs_operational": 2, 00:09:34.906 "base_bdevs_list": [ 00:09:34.906 { 00:09:34.906 "name": null, 00:09:34.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.906 "is_configured": false, 00:09:34.906 "data_offset": 2048, 00:09:34.906 "data_size": 63488 00:09:34.906 }, 00:09:34.906 { 00:09:34.906 "name": "pt2", 00:09:34.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.906 "is_configured": true, 00:09:34.906 "data_offset": 2048, 00:09:34.906 "data_size": 63488 00:09:34.906 }, 00:09:34.906 { 00:09:34.906 "name": null, 00:09:34.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.906 "is_configured": false, 00:09:34.906 "data_offset": 2048, 00:09:34.906 "data_size": 63488 00:09:34.906 } 
00:09:34.906 ] 00:09:34.906 }' 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.906 04:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.475 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:35.475 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:35.475 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:35.475 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.475 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.475 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.475 [2024-12-13 04:25:35.304112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.475 [2024-12-13 04:25:35.304218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.475 [2024-12-13 04:25:35.304262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:35.475 [2024-12-13 04:25:35.304290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.475 [2024-12-13 04:25:35.304749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.475 [2024-12-13 04:25:35.304806] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.475 [2024-12-13 04:25:35.304911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:35.475 [2024-12-13 04:25:35.304960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.475 [2024-12-13 04:25:35.305102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:09:35.475 [2024-12-13 04:25:35.305141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.475 [2024-12-13 04:25:35.305435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:35.475 [2024-12-13 04:25:35.305620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:35.475 [2024-12-13 04:25:35.305666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:35.476 [2024-12-13 04:25:35.305815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.476 pt3 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.476 
04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.476 "name": "raid_bdev1", 00:09:35.476 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:35.476 "strip_size_kb": 0, 00:09:35.476 "state": "online", 00:09:35.476 "raid_level": "raid1", 00:09:35.476 "superblock": true, 00:09:35.476 "num_base_bdevs": 3, 00:09:35.476 "num_base_bdevs_discovered": 2, 00:09:35.476 "num_base_bdevs_operational": 2, 00:09:35.476 "base_bdevs_list": [ 00:09:35.476 { 00:09:35.476 "name": null, 00:09:35.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.476 "is_configured": false, 00:09:35.476 "data_offset": 2048, 00:09:35.476 "data_size": 63488 00:09:35.476 }, 00:09:35.476 { 00:09:35.476 "name": "pt2", 00:09:35.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.476 "is_configured": true, 00:09:35.476 "data_offset": 2048, 00:09:35.476 "data_size": 63488 00:09:35.476 }, 00:09:35.476 { 00:09:35.476 "name": "pt3", 00:09:35.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.476 "is_configured": true, 00:09:35.476 "data_offset": 2048, 00:09:35.476 "data_size": 63488 00:09:35.476 } 00:09:35.476 ] 00:09:35.476 }' 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.476 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.735 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.735 04:25:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.735 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.735 [2024-12-13 04:25:35.743414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.735 [2024-12-13 04:25:35.743461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.735 [2024-12-13 04:25:35.743559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.735 [2024-12-13 04:25:35.743628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.735 [2024-12-13 04:25:35.743640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:35.735 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.995 [2024-12-13 04:25:35.819226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.995 [2024-12-13 04:25:35.819289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.995 [2024-12-13 04:25:35.819306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:35.995 [2024-12-13 04:25:35.819317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.995 [2024-12-13 04:25:35.821859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.995 [2024-12-13 04:25:35.821940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.995 [2024-12-13 04:25:35.822024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:35.995 [2024-12-13 04:25:35.822090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.995 [2024-12-13 04:25:35.822221] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:35.995 [2024-12-13 04:25:35.822237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.995 [2024-12-13 04:25:35.822251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:09:35.995 [2024-12-13 04:25:35.822297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.995 pt1 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.995 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.995 "name": "raid_bdev1", 00:09:35.995 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:35.995 "strip_size_kb": 0, 00:09:35.995 "state": "configuring", 00:09:35.995 "raid_level": "raid1", 00:09:35.995 "superblock": true, 00:09:35.995 "num_base_bdevs": 3, 00:09:35.995 "num_base_bdevs_discovered": 1, 00:09:35.995 "num_base_bdevs_operational": 2, 00:09:35.995 "base_bdevs_list": [ 00:09:35.995 { 00:09:35.995 "name": null, 00:09:35.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.995 "is_configured": false, 00:09:35.995 "data_offset": 2048, 00:09:35.995 "data_size": 63488 00:09:35.995 }, 00:09:35.995 { 00:09:35.995 "name": "pt2", 00:09:35.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.995 "is_configured": true, 00:09:35.995 "data_offset": 2048, 00:09:35.996 "data_size": 63488 00:09:35.996 }, 00:09:35.996 { 00:09:35.996 "name": null, 00:09:35.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.996 "is_configured": false, 00:09:35.996 "data_offset": 2048, 00:09:35.996 "data_size": 63488 00:09:35.996 } 00:09:35.996 ] 00:09:35.996 }' 00:09:35.996 04:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.996 04:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.255 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:36.255 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.255 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.255 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:36.255 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.515 [2024-12-13 04:25:36.302390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:36.515 [2024-12-13 04:25:36.302498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.515 [2024-12-13 04:25:36.302534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:36.515 [2024-12-13 04:25:36.302564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.515 [2024-12-13 04:25:36.303001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.515 [2024-12-13 04:25:36.303064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:36.515 [2024-12-13 04:25:36.303166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:36.515 [2024-12-13 04:25:36.303222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:36.515 [2024-12-13 04:25:36.303352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:09:36.515 [2024-12-13 04:25:36.303392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.515 [2024-12-13 04:25:36.303689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:36.515 [2024-12-13 04:25:36.303865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:09:36.515 [2024-12-13 04:25:36.303902] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:09:36.515 [2024-12-13 04:25:36.304052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.515 pt3 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.515 "name": "raid_bdev1", 00:09:36.515 "uuid": "9503327a-4df9-4e5d-819a-2c397c9df17d", 00:09:36.515 "strip_size_kb": 0, 00:09:36.515 "state": "online", 00:09:36.515 "raid_level": "raid1", 00:09:36.515 "superblock": true, 00:09:36.515 "num_base_bdevs": 3, 00:09:36.515 "num_base_bdevs_discovered": 2, 00:09:36.515 "num_base_bdevs_operational": 2, 00:09:36.515 "base_bdevs_list": [ 00:09:36.515 { 00:09:36.515 "name": null, 00:09:36.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.515 "is_configured": false, 00:09:36.515 "data_offset": 2048, 00:09:36.515 "data_size": 63488 00:09:36.515 }, 00:09:36.515 { 00:09:36.515 "name": "pt2", 00:09:36.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.515 "is_configured": true, 00:09:36.515 "data_offset": 2048, 00:09:36.515 "data_size": 63488 00:09:36.515 }, 00:09:36.515 { 00:09:36.515 "name": "pt3", 00:09:36.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.515 "is_configured": true, 00:09:36.515 "data_offset": 2048, 00:09:36.515 "data_size": 63488 00:09:36.515 } 00:09:36.515 ] 00:09:36.515 }' 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.515 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:36.825 [2024-12-13 04:25:36.777833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9503327a-4df9-4e5d-819a-2c397c9df17d '!=' 9503327a-4df9-4e5d-819a-2c397c9df17d ']' 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81341 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81341 ']' 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81341 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:36.825 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.090 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81341 00:09:37.090 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.090 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.090 killing process with pid 81341 00:09:37.090 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81341' 00:09:37.090 04:25:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 81341 00:09:37.090 [2024-12-13 04:25:36.870379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.090 [2024-12-13 04:25:36.870466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.090 [2024-12-13 04:25:36.870547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.090 [2024-12-13 04:25:36.870556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:09:37.090 04:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81341 00:09:37.090 [2024-12-13 04:25:36.931959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.350 04:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.350 00:09:37.350 real 0m6.566s 00:09:37.350 user 0m10.834s 00:09:37.350 sys 0m1.456s 00:09:37.350 ************************************ 00:09:37.350 END TEST raid_superblock_test 00:09:37.350 ************************************ 00:09:37.350 04:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.350 04:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 04:25:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:37.350 04:25:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.350 04:25:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.350 04:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 ************************************ 00:09:37.350 START TEST raid_read_error_test 00:09:37.350 ************************************ 00:09:37.350 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:37.351 04:25:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.351 04:25:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x8xyZn0luP 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81776 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81776 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81776 ']' 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.351 04:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.610 [2024-12-13 04:25:37.434157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:37.610 [2024-12-13 04:25:37.434282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81776 ] 00:09:37.610 [2024-12-13 04:25:37.589463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.870 [2024-12-13 04:25:37.627892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.870 [2024-12-13 04:25:37.703149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.870 [2024-12-13 04:25:37.703190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 BaseBdev1_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 true 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 [2024-12-13 04:25:38.287636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.440 [2024-12-13 04:25:38.287694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.440 [2024-12-13 04:25:38.287727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:38.440 [2024-12-13 04:25:38.287742] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.440 [2024-12-13 04:25:38.290207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.440 [2024-12-13 04:25:38.290305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.440 BaseBdev1 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 BaseBdev2_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 true 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 [2024-12-13 04:25:38.334318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.440 [2024-12-13 04:25:38.334368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.440 [2024-12-13 04:25:38.334389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:38.440 [2024-12-13 04:25:38.334407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.440 [2024-12-13 04:25:38.336836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.440 [2024-12-13 04:25:38.336874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.440 BaseBdev2 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 BaseBdev3_malloc 00:09:38.440 04:25:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 true 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 [2024-12-13 04:25:38.380747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:38.440 [2024-12-13 04:25:38.380790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.440 [2024-12-13 04:25:38.380811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:38.440 [2024-12-13 04:25:38.380819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.440 [2024-12-13 04:25:38.383195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.440 [2024-12-13 04:25:38.383231] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:38.440 BaseBdev3 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 [2024-12-13 04:25:38.392772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.440 [2024-12-13 04:25:38.394864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.440 [2024-12-13 04:25:38.394936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.440 [2024-12-13 04:25:38.395116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:38.440 [2024-12-13 04:25:38.395131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.440 [2024-12-13 04:25:38.395376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:38.440 [2024-12-13 04:25:38.395539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:38.440 [2024-12-13 04:25:38.395550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:38.440 [2024-12-13 04:25:38.395705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.440 04:25:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.440 "name": "raid_bdev1", 00:09:38.440 "uuid": "a720bfd3-7d83-468a-b40b-99c5a45f4ef1", 00:09:38.440 "strip_size_kb": 0, 00:09:38.440 "state": "online", 00:09:38.440 "raid_level": "raid1", 00:09:38.440 "superblock": true, 00:09:38.440 "num_base_bdevs": 3, 00:09:38.440 "num_base_bdevs_discovered": 3, 00:09:38.440 "num_base_bdevs_operational": 3, 00:09:38.440 "base_bdevs_list": [ 00:09:38.440 { 00:09:38.440 "name": "BaseBdev1", 00:09:38.440 "uuid": "67502f33-dea3-5edf-a552-644c9396ac82", 00:09:38.440 "is_configured": true, 00:09:38.440 "data_offset": 2048, 00:09:38.440 "data_size": 63488 00:09:38.440 }, 00:09:38.440 { 00:09:38.440 "name": "BaseBdev2", 00:09:38.440 "uuid": "8d8d22d0-ef2c-5108-8c7f-ac8848b7fa03", 00:09:38.440 "is_configured": true, 00:09:38.440 "data_offset": 2048, 00:09:38.440 "data_size": 63488 
00:09:38.440 }, 00:09:38.440 { 00:09:38.440 "name": "BaseBdev3", 00:09:38.440 "uuid": "6773c4d2-d8b4-5f3d-b689-87a22123120f", 00:09:38.440 "is_configured": true, 00:09:38.440 "data_offset": 2048, 00:09:38.441 "data_size": 63488 00:09:38.441 } 00:09:38.441 ] 00:09:38.441 }' 00:09:38.441 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.441 04:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.010 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:39.010 04:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.010 [2024-12-13 04:25:38.908474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.950 
04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.950 "name": "raid_bdev1", 00:09:39.950 "uuid": "a720bfd3-7d83-468a-b40b-99c5a45f4ef1", 00:09:39.950 "strip_size_kb": 0, 00:09:39.950 "state": "online", 00:09:39.950 "raid_level": "raid1", 00:09:39.950 "superblock": true, 00:09:39.950 "num_base_bdevs": 3, 00:09:39.950 "num_base_bdevs_discovered": 3, 00:09:39.950 "num_base_bdevs_operational": 3, 00:09:39.950 "base_bdevs_list": [ 00:09:39.950 { 00:09:39.950 "name": "BaseBdev1", 00:09:39.950 "uuid": "67502f33-dea3-5edf-a552-644c9396ac82", 
00:09:39.950 "is_configured": true, 00:09:39.950 "data_offset": 2048, 00:09:39.950 "data_size": 63488 00:09:39.950 }, 00:09:39.950 { 00:09:39.950 "name": "BaseBdev2", 00:09:39.950 "uuid": "8d8d22d0-ef2c-5108-8c7f-ac8848b7fa03", 00:09:39.950 "is_configured": true, 00:09:39.950 "data_offset": 2048, 00:09:39.950 "data_size": 63488 00:09:39.950 }, 00:09:39.950 { 00:09:39.950 "name": "BaseBdev3", 00:09:39.950 "uuid": "6773c4d2-d8b4-5f3d-b689-87a22123120f", 00:09:39.950 "is_configured": true, 00:09:39.950 "data_offset": 2048, 00:09:39.950 "data_size": 63488 00:09:39.950 } 00:09:39.950 ] 00:09:39.950 }' 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.950 04:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 [2024-12-13 04:25:40.300676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.520 [2024-12-13 04:25:40.300791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.520 [2024-12-13 04:25:40.303589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.520 [2024-12-13 04:25:40.303648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.520 [2024-12-13 04:25:40.303759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.520 [2024-12-13 04:25:40.303773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:40.520 { 00:09:40.520 "results": [ 00:09:40.520 { 00:09:40.520 "job": "raid_bdev1", 
00:09:40.520 "core_mask": "0x1", 00:09:40.520 "workload": "randrw", 00:09:40.520 "percentage": 50, 00:09:40.520 "status": "finished", 00:09:40.520 "queue_depth": 1, 00:09:40.520 "io_size": 131072, 00:09:40.520 "runtime": 1.393064, 00:09:40.520 "iops": 11145.216587321185, 00:09:40.520 "mibps": 1393.1520734151482, 00:09:40.520 "io_failed": 0, 00:09:40.520 "io_timeout": 0, 00:09:40.520 "avg_latency_us": 87.15718262702877, 00:09:40.520 "min_latency_us": 23.14061135371179, 00:09:40.520 "max_latency_us": 1387.989519650655 00:09:40.520 } 00:09:40.520 ], 00:09:40.520 "core_count": 1 00:09:40.520 } 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81776 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81776 ']' 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81776 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81776 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.520 killing process with pid 81776 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81776' 00:09:40.520 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81776 00:09:40.520 [2024-12-13 04:25:40.332668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.520 04:25:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81776 00:09:40.520 [2024-12-13 04:25:40.381842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x8xyZn0luP 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:40.780 00:09:40.780 real 0m3.385s 00:09:40.780 user 0m4.189s 00:09:40.780 sys 0m0.596s 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.780 ************************************ 00:09:40.780 END TEST raid_read_error_test 00:09:40.780 ************************************ 00:09:40.780 04:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.780 04:25:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:40.780 04:25:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:40.780 04:25:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.780 04:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.780 ************************************ 00:09:40.780 START TEST raid_write_error_test 00:09:40.780 ************************************ 00:09:40.780 04:25:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:40.780 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:40.780 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:40.780 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q39AoLwYzH 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81905 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81905 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 81905 ']' 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.040 04:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.041 04:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.041 [2024-12-13 04:25:40.893898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:41.041 [2024-12-13 04:25:40.894088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81905 ] 00:09:41.041 [2024-12-13 04:25:41.046763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.303 [2024-12-13 04:25:41.085216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.303 [2024-12-13 04:25:41.161701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.303 [2024-12-13 04:25:41.161738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 BaseBdev1_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 true 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 [2024-12-13 04:25:41.754571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:41.874 [2024-12-13 04:25:41.754631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.874 [2024-12-13 04:25:41.754653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:41.874 [2024-12-13 04:25:41.754662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.874 [2024-12-13 04:25:41.757165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.874 [2024-12-13 04:25:41.757213] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:41.874 BaseBdev1 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.874 BaseBdev2_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 true 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 [2024-12-13 04:25:41.800999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:41.874 [2024-12-13 04:25:41.801047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.874 [2024-12-13 04:25:41.801069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:41.874 [2024-12-13 04:25:41.801087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.874 [2024-12-13 04:25:41.803432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.874 [2024-12-13 04:25:41.803544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:41.874 BaseBdev2 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:41.874 04:25:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 BaseBdev3_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 true 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 [2024-12-13 04:25:41.847341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:41.874 [2024-12-13 04:25:41.847432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.874 [2024-12-13 04:25:41.847489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:41.874 [2024-12-13 04:25:41.847499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.874 [2024-12-13 04:25:41.849873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.874 [2024-12-13 04:25:41.849909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:41.874 BaseBdev3 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.874 [2024-12-13 04:25:41.859379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.874 [2024-12-13 04:25:41.861503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.874 [2024-12-13 04:25:41.861628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.874 [2024-12-13 04:25:41.861837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:41.874 [2024-12-13 04:25:41.861853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.874 [2024-12-13 04:25:41.862141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:41.874 [2024-12-13 04:25:41.862299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:41.874 [2024-12-13 04:25:41.862310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:41.874 [2024-12-13 04:25:41.862436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.874 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.134 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.134 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.134 "name": "raid_bdev1", 00:09:42.134 "uuid": "2a966b20-5f83-4a9c-966a-c5022941b0a0", 00:09:42.134 "strip_size_kb": 0, 00:09:42.134 "state": "online", 00:09:42.134 "raid_level": "raid1", 00:09:42.134 "superblock": true, 00:09:42.134 "num_base_bdevs": 3, 00:09:42.134 "num_base_bdevs_discovered": 3, 00:09:42.134 "num_base_bdevs_operational": 3, 00:09:42.134 "base_bdevs_list": [ 00:09:42.134 { 00:09:42.134 "name": "BaseBdev1", 00:09:42.134 
"uuid": "d15e6c02-21d9-5299-9e75-f1735eac1ea2", 00:09:42.134 "is_configured": true, 00:09:42.134 "data_offset": 2048, 00:09:42.134 "data_size": 63488 00:09:42.134 }, 00:09:42.134 { 00:09:42.134 "name": "BaseBdev2", 00:09:42.134 "uuid": "f723d1a7-ddcf-5798-83d2-d1022ebc3d1e", 00:09:42.134 "is_configured": true, 00:09:42.134 "data_offset": 2048, 00:09:42.134 "data_size": 63488 00:09:42.134 }, 00:09:42.134 { 00:09:42.134 "name": "BaseBdev3", 00:09:42.134 "uuid": "43317275-86e4-544a-b119-6d4119ad49d1", 00:09:42.134 "is_configured": true, 00:09:42.134 "data_offset": 2048, 00:09:42.134 "data_size": 63488 00:09:42.134 } 00:09:42.134 ] 00:09:42.134 }' 00:09:42.134 04:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.134 04:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.393 04:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:42.393 04:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:42.393 [2024-12-13 04:25:42.394905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.332 [2024-12-13 04:25:43.318504] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:43.332 [2024-12-13 04:25:43.318666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.332 [2024-12-13 04:25:43.318931] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002d50 
00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.332 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.333 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.333 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:43.333 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.592 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.592 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.592 "name": "raid_bdev1", 00:09:43.592 "uuid": "2a966b20-5f83-4a9c-966a-c5022941b0a0", 00:09:43.592 "strip_size_kb": 0, 00:09:43.592 "state": "online", 00:09:43.592 "raid_level": "raid1", 00:09:43.592 "superblock": true, 00:09:43.592 "num_base_bdevs": 3, 00:09:43.592 "num_base_bdevs_discovered": 2, 00:09:43.592 "num_base_bdevs_operational": 2, 00:09:43.592 "base_bdevs_list": [ 00:09:43.592 { 00:09:43.592 "name": null, 00:09:43.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.592 "is_configured": false, 00:09:43.592 "data_offset": 0, 00:09:43.592 "data_size": 63488 00:09:43.592 }, 00:09:43.592 { 00:09:43.592 "name": "BaseBdev2", 00:09:43.592 "uuid": "f723d1a7-ddcf-5798-83d2-d1022ebc3d1e", 00:09:43.592 "is_configured": true, 00:09:43.592 "data_offset": 2048, 00:09:43.592 "data_size": 63488 00:09:43.592 }, 00:09:43.592 { 00:09:43.592 "name": "BaseBdev3", 00:09:43.592 "uuid": "43317275-86e4-544a-b119-6d4119ad49d1", 00:09:43.592 "is_configured": true, 00:09:43.592 "data_offset": 2048, 00:09:43.592 "data_size": 63488 00:09:43.592 } 00:09:43.592 ] 00:09:43.592 }' 00:09:43.592 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.592 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.852 [2024-12-13 04:25:43.777330] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.852 [2024-12-13 04:25:43.777368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.852 [2024-12-13 04:25:43.779898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.852 [2024-12-13 04:25:43.779951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.852 [2024-12-13 04:25:43.780050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.852 [2024-12-13 04:25:43.780060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:43.852 { 00:09:43.852 "results": [ 00:09:43.852 { 00:09:43.852 "job": "raid_bdev1", 00:09:43.852 "core_mask": "0x1", 00:09:43.852 "workload": "randrw", 00:09:43.852 "percentage": 50, 00:09:43.852 "status": "finished", 00:09:43.852 "queue_depth": 1, 00:09:43.852 "io_size": 131072, 00:09:43.852 "runtime": 1.382941, 00:09:43.852 "iops": 12666.483964247209, 00:09:43.852 "mibps": 1583.310495530901, 00:09:43.852 "io_failed": 0, 00:09:43.852 "io_timeout": 0, 00:09:43.852 "avg_latency_us": 76.30927610433582, 00:09:43.852 "min_latency_us": 22.581659388646287, 00:09:43.852 "max_latency_us": 1423.7624454148472 00:09:43.852 } 00:09:43.852 ], 00:09:43.852 "core_count": 1 00:09:43.852 } 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81905 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 81905 ']' 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 81905 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:43.852 04:25:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81905 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.852 killing process with pid 81905 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81905' 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 81905 00:09:43.852 [2024-12-13 04:25:43.819797] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.852 04:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 81905 00:09:44.112 [2024-12-13 04:25:43.866889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q39AoLwYzH 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.373 ************************************ 00:09:44.373 END TEST raid_write_error_test 00:09:44.373 ************************************ 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:44.373 00:09:44.373 real 0m3.409s 00:09:44.373 user 0m4.198s 00:09:44.373 sys 0m0.621s 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.373 04:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.373 04:25:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:44.373 04:25:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:44.373 04:25:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:44.373 04:25:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.373 04:25:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.373 04:25:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.373 ************************************ 00:09:44.373 START TEST raid_state_function_test 00:09:44.373 ************************************ 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.373 
04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:44.373 04:25:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82032 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82032' 00:09:44.373 Process raid pid: 82032 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82032 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82032 ']' 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.373 04:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.373 [2024-12-13 04:25:44.367055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:44.373 [2024-12-13 04:25:44.367646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.633 [2024-12-13 04:25:44.523802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.633 [2024-12-13 04:25:44.564009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.633 [2024-12-13 04:25:44.640327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.633 [2024-12-13 04:25:44.640372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.205 [2024-12-13 04:25:45.206328] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.205 [2024-12-13 04:25:45.206490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.205 [2024-12-13 04:25:45.206507] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.205 [2024-12-13 04:25:45.206518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.205 [2024-12-13 04:25:45.206524] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:45.205 [2024-12-13 04:25:45.206537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.205 [2024-12-13 04:25:45.206543] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:45.205 [2024-12-13 04:25:45.206553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.205 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.463 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.463 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.463 "name": "Existed_Raid", 00:09:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.463 "strip_size_kb": 64, 00:09:45.463 "state": "configuring", 00:09:45.463 "raid_level": "raid0", 00:09:45.463 "superblock": false, 00:09:45.463 "num_base_bdevs": 4, 00:09:45.463 "num_base_bdevs_discovered": 0, 00:09:45.463 "num_base_bdevs_operational": 4, 00:09:45.463 "base_bdevs_list": [ 00:09:45.463 { 00:09:45.463 "name": "BaseBdev1", 00:09:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.463 "is_configured": false, 00:09:45.463 "data_offset": 0, 00:09:45.463 "data_size": 0 00:09:45.463 }, 00:09:45.463 { 00:09:45.463 "name": "BaseBdev2", 00:09:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.463 "is_configured": false, 00:09:45.463 "data_offset": 0, 00:09:45.463 "data_size": 0 00:09:45.463 }, 00:09:45.463 { 00:09:45.463 "name": "BaseBdev3", 00:09:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.463 "is_configured": false, 00:09:45.463 "data_offset": 0, 00:09:45.463 "data_size": 0 00:09:45.463 }, 00:09:45.463 { 00:09:45.463 "name": "BaseBdev4", 00:09:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.463 "is_configured": false, 00:09:45.463 "data_offset": 0, 00:09:45.463 "data_size": 0 00:09:45.463 } 00:09:45.463 ] 00:09:45.463 }' 00:09:45.463 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.463 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.721 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:45.721 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.721 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.721 [2024-12-13 04:25:45.617532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.721 [2024-12-13 04:25:45.617644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:45.721 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.721 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.722 [2024-12-13 04:25:45.629526] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.722 [2024-12-13 04:25:45.629617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.722 [2024-12-13 04:25:45.629643] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.722 [2024-12-13 04:25:45.629665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.722 [2024-12-13 04:25:45.629682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.722 [2024-12-13 04:25:45.629703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.722 [2024-12-13 04:25:45.629719] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:45.722 [2024-12-13 04:25:45.629740] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.722 [2024-12-13 04:25:45.656532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.722 BaseBdev1 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.722 [ 00:09:45.722 { 00:09:45.722 "name": "BaseBdev1", 00:09:45.722 "aliases": [ 00:09:45.722 "d11eba6a-6e16-4fc5-85df-de9389d977f2" 00:09:45.722 ], 00:09:45.722 "product_name": "Malloc disk", 00:09:45.722 "block_size": 512, 00:09:45.722 "num_blocks": 65536, 00:09:45.722 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:45.722 "assigned_rate_limits": { 00:09:45.722 "rw_ios_per_sec": 0, 00:09:45.722 "rw_mbytes_per_sec": 0, 00:09:45.722 "r_mbytes_per_sec": 0, 00:09:45.722 "w_mbytes_per_sec": 0 00:09:45.722 }, 00:09:45.722 "claimed": true, 00:09:45.722 "claim_type": "exclusive_write", 00:09:45.722 "zoned": false, 00:09:45.722 "supported_io_types": { 00:09:45.722 "read": true, 00:09:45.722 "write": true, 00:09:45.722 "unmap": true, 00:09:45.722 "flush": true, 00:09:45.722 "reset": true, 00:09:45.722 "nvme_admin": false, 00:09:45.722 "nvme_io": false, 00:09:45.722 "nvme_io_md": false, 00:09:45.722 "write_zeroes": true, 00:09:45.722 "zcopy": true, 00:09:45.722 "get_zone_info": false, 00:09:45.722 "zone_management": false, 00:09:45.722 "zone_append": false, 00:09:45.722 "compare": false, 00:09:45.722 "compare_and_write": false, 00:09:45.722 "abort": true, 00:09:45.722 "seek_hole": false, 00:09:45.722 "seek_data": false, 00:09:45.722 "copy": true, 00:09:45.722 "nvme_iov_md": false 00:09:45.722 }, 00:09:45.722 "memory_domains": [ 00:09:45.722 { 00:09:45.722 "dma_device_id": "system", 00:09:45.722 "dma_device_type": 1 00:09:45.722 }, 00:09:45.722 { 00:09:45.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.722 "dma_device_type": 2 00:09:45.722 } 00:09:45.722 ], 00:09:45.722 "driver_specific": {} 00:09:45.722 } 00:09:45.722 ] 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.722 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.981 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.981 "name": "Existed_Raid", 
00:09:45.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.981 "strip_size_kb": 64, 00:09:45.981 "state": "configuring", 00:09:45.981 "raid_level": "raid0", 00:09:45.981 "superblock": false, 00:09:45.981 "num_base_bdevs": 4, 00:09:45.981 "num_base_bdevs_discovered": 1, 00:09:45.981 "num_base_bdevs_operational": 4, 00:09:45.981 "base_bdevs_list": [ 00:09:45.981 { 00:09:45.982 "name": "BaseBdev1", 00:09:45.982 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:45.982 "is_configured": true, 00:09:45.982 "data_offset": 0, 00:09:45.982 "data_size": 65536 00:09:45.982 }, 00:09:45.982 { 00:09:45.982 "name": "BaseBdev2", 00:09:45.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.982 "is_configured": false, 00:09:45.982 "data_offset": 0, 00:09:45.982 "data_size": 0 00:09:45.982 }, 00:09:45.982 { 00:09:45.982 "name": "BaseBdev3", 00:09:45.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.982 "is_configured": false, 00:09:45.982 "data_offset": 0, 00:09:45.982 "data_size": 0 00:09:45.982 }, 00:09:45.982 { 00:09:45.982 "name": "BaseBdev4", 00:09:45.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.982 "is_configured": false, 00:09:45.982 "data_offset": 0, 00:09:45.982 "data_size": 0 00:09:45.982 } 00:09:45.982 ] 00:09:45.982 }' 00:09:45.982 04:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.982 04:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.242 [2024-12-13 04:25:46.119696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.242 [2024-12-13 04:25:46.119743] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.242 [2024-12-13 04:25:46.131733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.242 [2024-12-13 04:25:46.133940] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.242 [2024-12-13 04:25:46.134017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.242 [2024-12-13 04:25:46.134045] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.242 [2024-12-13 04:25:46.134067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.242 [2024-12-13 04:25:46.134085] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.242 [2024-12-13 04:25:46.134105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.242 "name": "Existed_Raid", 00:09:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.242 "strip_size_kb": 64, 00:09:46.242 "state": "configuring", 00:09:46.242 "raid_level": "raid0", 00:09:46.242 "superblock": false, 00:09:46.242 "num_base_bdevs": 4, 00:09:46.242 
"num_base_bdevs_discovered": 1, 00:09:46.242 "num_base_bdevs_operational": 4, 00:09:46.242 "base_bdevs_list": [ 00:09:46.242 { 00:09:46.242 "name": "BaseBdev1", 00:09:46.242 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:46.242 "is_configured": true, 00:09:46.242 "data_offset": 0, 00:09:46.242 "data_size": 65536 00:09:46.242 }, 00:09:46.242 { 00:09:46.242 "name": "BaseBdev2", 00:09:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.242 "is_configured": false, 00:09:46.242 "data_offset": 0, 00:09:46.242 "data_size": 0 00:09:46.242 }, 00:09:46.242 { 00:09:46.242 "name": "BaseBdev3", 00:09:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.242 "is_configured": false, 00:09:46.242 "data_offset": 0, 00:09:46.242 "data_size": 0 00:09:46.242 }, 00:09:46.242 { 00:09:46.242 "name": "BaseBdev4", 00:09:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.242 "is_configured": false, 00:09:46.242 "data_offset": 0, 00:09:46.242 "data_size": 0 00:09:46.242 } 00:09:46.242 ] 00:09:46.242 }' 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.242 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 [2024-12-13 04:25:46.535846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.813 BaseBdev2 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.813 04:25:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 [ 00:09:46.813 { 00:09:46.813 "name": "BaseBdev2", 00:09:46.813 "aliases": [ 00:09:46.813 "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1" 00:09:46.813 ], 00:09:46.813 "product_name": "Malloc disk", 00:09:46.813 "block_size": 512, 00:09:46.813 "num_blocks": 65536, 00:09:46.813 "uuid": "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1", 00:09:46.813 "assigned_rate_limits": { 00:09:46.813 "rw_ios_per_sec": 0, 00:09:46.813 "rw_mbytes_per_sec": 0, 00:09:46.813 "r_mbytes_per_sec": 0, 00:09:46.813 "w_mbytes_per_sec": 0 00:09:46.813 }, 00:09:46.813 "claimed": true, 00:09:46.813 "claim_type": "exclusive_write", 00:09:46.813 "zoned": false, 00:09:46.813 "supported_io_types": { 
00:09:46.813 "read": true, 00:09:46.813 "write": true, 00:09:46.813 "unmap": true, 00:09:46.813 "flush": true, 00:09:46.813 "reset": true, 00:09:46.813 "nvme_admin": false, 00:09:46.813 "nvme_io": false, 00:09:46.813 "nvme_io_md": false, 00:09:46.813 "write_zeroes": true, 00:09:46.813 "zcopy": true, 00:09:46.813 "get_zone_info": false, 00:09:46.813 "zone_management": false, 00:09:46.813 "zone_append": false, 00:09:46.813 "compare": false, 00:09:46.813 "compare_and_write": false, 00:09:46.813 "abort": true, 00:09:46.813 "seek_hole": false, 00:09:46.813 "seek_data": false, 00:09:46.813 "copy": true, 00:09:46.813 "nvme_iov_md": false 00:09:46.813 }, 00:09:46.813 "memory_domains": [ 00:09:46.813 { 00:09:46.813 "dma_device_id": "system", 00:09:46.813 "dma_device_type": 1 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.813 "dma_device_type": 2 00:09:46.813 } 00:09:46.813 ], 00:09:46.813 "driver_specific": {} 00:09:46.813 } 00:09:46.813 ] 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.813 "name": "Existed_Raid", 00:09:46.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.813 "strip_size_kb": 64, 00:09:46.813 "state": "configuring", 00:09:46.813 "raid_level": "raid0", 00:09:46.813 "superblock": false, 00:09:46.813 "num_base_bdevs": 4, 00:09:46.813 "num_base_bdevs_discovered": 2, 00:09:46.813 "num_base_bdevs_operational": 4, 00:09:46.813 "base_bdevs_list": [ 00:09:46.813 { 00:09:46.813 "name": "BaseBdev1", 00:09:46.813 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:46.813 "is_configured": true, 00:09:46.813 "data_offset": 0, 00:09:46.813 "data_size": 65536 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "name": "BaseBdev2", 00:09:46.813 "uuid": "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1", 00:09:46.813 
"is_configured": true, 00:09:46.813 "data_offset": 0, 00:09:46.813 "data_size": 65536 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "name": "BaseBdev3", 00:09:46.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.813 "is_configured": false, 00:09:46.813 "data_offset": 0, 00:09:46.813 "data_size": 0 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "name": "BaseBdev4", 00:09:46.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.813 "is_configured": false, 00:09:46.813 "data_offset": 0, 00:09:46.813 "data_size": 0 00:09:46.813 } 00:09:46.813 ] 00:09:46.813 }' 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.813 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.073 04:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.073 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.073 04:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.073 [2024-12-13 04:25:47.022260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.073 BaseBdev3 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.073 04:25:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.074 [ 00:09:47.074 { 00:09:47.074 "name": "BaseBdev3", 00:09:47.074 "aliases": [ 00:09:47.074 "9d3693c2-3a0f-49db-b601-430791d5508d" 00:09:47.074 ], 00:09:47.074 "product_name": "Malloc disk", 00:09:47.074 "block_size": 512, 00:09:47.074 "num_blocks": 65536, 00:09:47.074 "uuid": "9d3693c2-3a0f-49db-b601-430791d5508d", 00:09:47.074 "assigned_rate_limits": { 00:09:47.074 "rw_ios_per_sec": 0, 00:09:47.074 "rw_mbytes_per_sec": 0, 00:09:47.074 "r_mbytes_per_sec": 0, 00:09:47.074 "w_mbytes_per_sec": 0 00:09:47.074 }, 00:09:47.074 "claimed": true, 00:09:47.074 "claim_type": "exclusive_write", 00:09:47.074 "zoned": false, 00:09:47.074 "supported_io_types": { 00:09:47.074 "read": true, 00:09:47.074 "write": true, 00:09:47.074 "unmap": true, 00:09:47.074 "flush": true, 00:09:47.074 "reset": true, 00:09:47.074 "nvme_admin": false, 00:09:47.074 "nvme_io": false, 00:09:47.074 "nvme_io_md": false, 00:09:47.074 "write_zeroes": true, 00:09:47.074 "zcopy": true, 00:09:47.074 "get_zone_info": false, 00:09:47.074 "zone_management": false, 00:09:47.074 "zone_append": false, 00:09:47.074 "compare": false, 00:09:47.074 "compare_and_write": false, 
00:09:47.074 "abort": true, 00:09:47.074 "seek_hole": false, 00:09:47.074 "seek_data": false, 00:09:47.074 "copy": true, 00:09:47.074 "nvme_iov_md": false 00:09:47.074 }, 00:09:47.074 "memory_domains": [ 00:09:47.074 { 00:09:47.074 "dma_device_id": "system", 00:09:47.074 "dma_device_type": 1 00:09:47.074 }, 00:09:47.074 { 00:09:47.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.074 "dma_device_type": 2 00:09:47.074 } 00:09:47.074 ], 00:09:47.074 "driver_specific": {} 00:09:47.074 } 00:09:47.074 ] 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.074 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.334 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.334 "name": "Existed_Raid", 00:09:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.334 "strip_size_kb": 64, 00:09:47.334 "state": "configuring", 00:09:47.334 "raid_level": "raid0", 00:09:47.334 "superblock": false, 00:09:47.334 "num_base_bdevs": 4, 00:09:47.334 "num_base_bdevs_discovered": 3, 00:09:47.334 "num_base_bdevs_operational": 4, 00:09:47.334 "base_bdevs_list": [ 00:09:47.334 { 00:09:47.334 "name": "BaseBdev1", 00:09:47.334 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:47.334 "is_configured": true, 00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 65536 00:09:47.334 }, 00:09:47.334 { 00:09:47.334 "name": "BaseBdev2", 00:09:47.334 "uuid": "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1", 00:09:47.334 "is_configured": true, 00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 65536 00:09:47.334 }, 00:09:47.334 { 00:09:47.334 "name": "BaseBdev3", 00:09:47.334 "uuid": "9d3693c2-3a0f-49db-b601-430791d5508d", 00:09:47.334 "is_configured": true, 00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 65536 00:09:47.334 }, 00:09:47.334 { 00:09:47.334 "name": "BaseBdev4", 00:09:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.334 "is_configured": false, 
00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 0 00:09:47.334 } 00:09:47.334 ] 00:09:47.334 }' 00:09:47.334 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.334 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.593 [2024-12-13 04:25:47.514128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:47.593 [2024-12-13 04:25:47.514229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:47.593 [2024-12-13 04:25:47.514254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:47.593 [2024-12-13 04:25:47.514629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:47.593 [2024-12-13 04:25:47.514809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:47.593 [2024-12-13 04:25:47.514823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:47.593 [2024-12-13 04:25:47.515045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.593 BaseBdev4 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.593 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.594 [ 00:09:47.594 { 00:09:47.594 "name": "BaseBdev4", 00:09:47.594 "aliases": [ 00:09:47.594 "ed0ab0db-c553-4234-9db5-c085cbfb69fa" 00:09:47.594 ], 00:09:47.594 "product_name": "Malloc disk", 00:09:47.594 "block_size": 512, 00:09:47.594 "num_blocks": 65536, 00:09:47.594 "uuid": "ed0ab0db-c553-4234-9db5-c085cbfb69fa", 00:09:47.594 "assigned_rate_limits": { 00:09:47.594 "rw_ios_per_sec": 0, 00:09:47.594 "rw_mbytes_per_sec": 0, 00:09:47.594 "r_mbytes_per_sec": 0, 00:09:47.594 "w_mbytes_per_sec": 0 00:09:47.594 }, 00:09:47.594 "claimed": true, 00:09:47.594 "claim_type": "exclusive_write", 00:09:47.594 "zoned": false, 00:09:47.594 "supported_io_types": { 00:09:47.594 "read": true, 00:09:47.594 "write": true, 00:09:47.594 "unmap": true, 00:09:47.594 "flush": true, 00:09:47.594 "reset": true, 00:09:47.594 
"nvme_admin": false, 00:09:47.594 "nvme_io": false, 00:09:47.594 "nvme_io_md": false, 00:09:47.594 "write_zeroes": true, 00:09:47.594 "zcopy": true, 00:09:47.594 "get_zone_info": false, 00:09:47.594 "zone_management": false, 00:09:47.594 "zone_append": false, 00:09:47.594 "compare": false, 00:09:47.594 "compare_and_write": false, 00:09:47.594 "abort": true, 00:09:47.594 "seek_hole": false, 00:09:47.594 "seek_data": false, 00:09:47.594 "copy": true, 00:09:47.594 "nvme_iov_md": false 00:09:47.594 }, 00:09:47.594 "memory_domains": [ 00:09:47.594 { 00:09:47.594 "dma_device_id": "system", 00:09:47.594 "dma_device_type": 1 00:09:47.594 }, 00:09:47.594 { 00:09:47.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.594 "dma_device_type": 2 00:09:47.594 } 00:09:47.594 ], 00:09:47.594 "driver_specific": {} 00:09:47.594 } 00:09:47.594 ] 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.594 04:25:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.594 "name": "Existed_Raid", 00:09:47.594 "uuid": "caa83af8-5f24-4c7c-a011-1fbf6998c4a8", 00:09:47.594 "strip_size_kb": 64, 00:09:47.594 "state": "online", 00:09:47.594 "raid_level": "raid0", 00:09:47.594 "superblock": false, 00:09:47.594 "num_base_bdevs": 4, 00:09:47.594 "num_base_bdevs_discovered": 4, 00:09:47.594 "num_base_bdevs_operational": 4, 00:09:47.594 "base_bdevs_list": [ 00:09:47.594 { 00:09:47.594 "name": "BaseBdev1", 00:09:47.594 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:47.594 "is_configured": true, 00:09:47.594 "data_offset": 0, 00:09:47.594 "data_size": 65536 00:09:47.594 }, 00:09:47.594 { 00:09:47.594 "name": "BaseBdev2", 00:09:47.594 "uuid": "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1", 00:09:47.594 "is_configured": true, 00:09:47.594 "data_offset": 0, 00:09:47.594 "data_size": 65536 00:09:47.594 }, 00:09:47.594 { 00:09:47.594 "name": "BaseBdev3", 00:09:47.594 "uuid": 
"9d3693c2-3a0f-49db-b601-430791d5508d", 00:09:47.594 "is_configured": true, 00:09:47.594 "data_offset": 0, 00:09:47.594 "data_size": 65536 00:09:47.594 }, 00:09:47.594 { 00:09:47.594 "name": "BaseBdev4", 00:09:47.594 "uuid": "ed0ab0db-c553-4234-9db5-c085cbfb69fa", 00:09:47.594 "is_configured": true, 00:09:47.594 "data_offset": 0, 00:09:47.594 "data_size": 65536 00:09:47.594 } 00:09:47.594 ] 00:09:47.594 }' 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.594 04:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.288 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.288 04:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.288 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.288 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.288 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.288 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.289 [2024-12-13 04:25:48.013740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.289 04:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.289 "name": "Existed_Raid", 00:09:48.289 "aliases": [ 00:09:48.289 "caa83af8-5f24-4c7c-a011-1fbf6998c4a8" 00:09:48.289 ], 00:09:48.289 "product_name": "Raid Volume", 00:09:48.289 "block_size": 512, 00:09:48.289 "num_blocks": 262144, 00:09:48.289 "uuid": "caa83af8-5f24-4c7c-a011-1fbf6998c4a8", 00:09:48.289 "assigned_rate_limits": { 00:09:48.289 "rw_ios_per_sec": 0, 00:09:48.289 "rw_mbytes_per_sec": 0, 00:09:48.289 "r_mbytes_per_sec": 0, 00:09:48.289 "w_mbytes_per_sec": 0 00:09:48.289 }, 00:09:48.289 "claimed": false, 00:09:48.289 "zoned": false, 00:09:48.289 "supported_io_types": { 00:09:48.289 "read": true, 00:09:48.289 "write": true, 00:09:48.289 "unmap": true, 00:09:48.289 "flush": true, 00:09:48.289 "reset": true, 00:09:48.289 "nvme_admin": false, 00:09:48.289 "nvme_io": false, 00:09:48.289 "nvme_io_md": false, 00:09:48.289 "write_zeroes": true, 00:09:48.289 "zcopy": false, 00:09:48.289 "get_zone_info": false, 00:09:48.289 "zone_management": false, 00:09:48.289 "zone_append": false, 00:09:48.289 "compare": false, 00:09:48.289 "compare_and_write": false, 00:09:48.289 "abort": false, 00:09:48.289 "seek_hole": false, 00:09:48.289 "seek_data": false, 00:09:48.289 "copy": false, 00:09:48.289 "nvme_iov_md": false 00:09:48.289 }, 00:09:48.289 "memory_domains": [ 00:09:48.289 { 00:09:48.289 "dma_device_id": "system", 00:09:48.289 "dma_device_type": 1 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.289 "dma_device_type": 2 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "system", 00:09:48.289 "dma_device_type": 1 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.289 "dma_device_type": 2 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "system", 00:09:48.289 "dma_device_type": 1 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:48.289 "dma_device_type": 2 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "system", 00:09:48.289 "dma_device_type": 1 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.289 "dma_device_type": 2 00:09:48.289 } 00:09:48.289 ], 00:09:48.289 "driver_specific": { 00:09:48.289 "raid": { 00:09:48.289 "uuid": "caa83af8-5f24-4c7c-a011-1fbf6998c4a8", 00:09:48.289 "strip_size_kb": 64, 00:09:48.289 "state": "online", 00:09:48.289 "raid_level": "raid0", 00:09:48.289 "superblock": false, 00:09:48.289 "num_base_bdevs": 4, 00:09:48.289 "num_base_bdevs_discovered": 4, 00:09:48.289 "num_base_bdevs_operational": 4, 00:09:48.289 "base_bdevs_list": [ 00:09:48.289 { 00:09:48.289 "name": "BaseBdev1", 00:09:48.289 "uuid": "d11eba6a-6e16-4fc5-85df-de9389d977f2", 00:09:48.289 "is_configured": true, 00:09:48.289 "data_offset": 0, 00:09:48.289 "data_size": 65536 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "name": "BaseBdev2", 00:09:48.289 "uuid": "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1", 00:09:48.289 "is_configured": true, 00:09:48.289 "data_offset": 0, 00:09:48.289 "data_size": 65536 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "name": "BaseBdev3", 00:09:48.289 "uuid": "9d3693c2-3a0f-49db-b601-430791d5508d", 00:09:48.289 "is_configured": true, 00:09:48.289 "data_offset": 0, 00:09:48.289 "data_size": 65536 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "name": "BaseBdev4", 00:09:48.289 "uuid": "ed0ab0db-c553-4234-9db5-c085cbfb69fa", 00:09:48.289 "is_configured": true, 00:09:48.289 "data_offset": 0, 00:09:48.289 "data_size": 65536 00:09:48.289 } 00:09:48.289 ] 00:09:48.289 } 00:09:48.289 } 00:09:48.289 }' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:48.289 BaseBdev2 00:09:48.289 BaseBdev3 
00:09:48.289 BaseBdev4' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.289 04:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.289 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.564 04:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.564 [2024-12-13 04:25:48.328817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.564 [2024-12-13 04:25:48.328848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.564 [2024-12-13 04:25:48.328914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.564 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.565 "name": "Existed_Raid", 00:09:48.565 "uuid": "caa83af8-5f24-4c7c-a011-1fbf6998c4a8", 00:09:48.565 "strip_size_kb": 64, 00:09:48.565 "state": "offline", 00:09:48.565 "raid_level": "raid0", 00:09:48.565 "superblock": false, 00:09:48.565 "num_base_bdevs": 4, 00:09:48.565 "num_base_bdevs_discovered": 3, 00:09:48.565 "num_base_bdevs_operational": 3, 00:09:48.565 "base_bdevs_list": [ 00:09:48.565 { 00:09:48.565 "name": null, 00:09:48.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.565 "is_configured": false, 00:09:48.565 "data_offset": 0, 00:09:48.565 "data_size": 65536 00:09:48.565 }, 00:09:48.565 { 00:09:48.565 "name": "BaseBdev2", 00:09:48.565 "uuid": "f7cd0f6e-b9bc-4d9d-b1dd-f8314a007ed1", 00:09:48.565 "is_configured": 
true, 00:09:48.565 "data_offset": 0, 00:09:48.565 "data_size": 65536 00:09:48.565 }, 00:09:48.565 { 00:09:48.565 "name": "BaseBdev3", 00:09:48.565 "uuid": "9d3693c2-3a0f-49db-b601-430791d5508d", 00:09:48.565 "is_configured": true, 00:09:48.565 "data_offset": 0, 00:09:48.565 "data_size": 65536 00:09:48.565 }, 00:09:48.565 { 00:09:48.565 "name": "BaseBdev4", 00:09:48.565 "uuid": "ed0ab0db-c553-4234-9db5-c085cbfb69fa", 00:09:48.565 "is_configured": true, 00:09:48.565 "data_offset": 0, 00:09:48.565 "data_size": 65536 00:09:48.565 } 00:09:48.565 ] 00:09:48.565 }' 00:09:48.565 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.565 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.823 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.824 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.824 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:48.824 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:48.824 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.083 [2024-12-13 04:25:48.844655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.083 [2024-12-13 04:25:48.924909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.083 04:25:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.083 04:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.083 [2024-12-13 04:25:49.001265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:49.083 [2024-12-13 04:25:49.001364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:49.083 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.083 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.083 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.084 BaseBdev2 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.084 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.343 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.343 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.343 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.343 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.343 [ 00:09:49.343 { 00:09:49.343 "name": "BaseBdev2", 00:09:49.343 "aliases": [ 00:09:49.343 "8976553e-3ace-4efa-b8b3-5ee3d597aba6" 00:09:49.343 ], 00:09:49.343 "product_name": "Malloc disk", 00:09:49.343 "block_size": 512, 00:09:49.343 "num_blocks": 65536, 00:09:49.343 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:49.343 "assigned_rate_limits": { 00:09:49.343 "rw_ios_per_sec": 0, 00:09:49.343 "rw_mbytes_per_sec": 0, 00:09:49.343 "r_mbytes_per_sec": 0, 00:09:49.343 "w_mbytes_per_sec": 0 00:09:49.343 }, 00:09:49.343 "claimed": false, 00:09:49.343 "zoned": false, 00:09:49.343 "supported_io_types": { 00:09:49.343 "read": true, 00:09:49.343 "write": true, 00:09:49.343 "unmap": true, 00:09:49.343 "flush": true, 00:09:49.343 "reset": true, 00:09:49.343 "nvme_admin": false, 00:09:49.343 "nvme_io": false, 00:09:49.343 "nvme_io_md": false, 00:09:49.343 "write_zeroes": true, 00:09:49.343 "zcopy": true, 00:09:49.343 "get_zone_info": false, 00:09:49.343 "zone_management": false, 00:09:49.343 "zone_append": false, 00:09:49.343 "compare": false, 00:09:49.343 "compare_and_write": false, 00:09:49.343 "abort": true, 00:09:49.343 "seek_hole": false, 00:09:49.343 
"seek_data": false, 00:09:49.343 "copy": true, 00:09:49.343 "nvme_iov_md": false 00:09:49.343 }, 00:09:49.344 "memory_domains": [ 00:09:49.344 { 00:09:49.344 "dma_device_id": "system", 00:09:49.344 "dma_device_type": 1 00:09:49.344 }, 00:09:49.344 { 00:09:49.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.344 "dma_device_type": 2 00:09:49.344 } 00:09:49.344 ], 00:09:49.344 "driver_specific": {} 00:09:49.344 } 00:09:49.344 ] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 BaseBdev3 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 [ 00:09:49.344 { 00:09:49.344 "name": "BaseBdev3", 00:09:49.344 "aliases": [ 00:09:49.344 "28f32da7-7360-4493-be7b-14e846d58bc9" 00:09:49.344 ], 00:09:49.344 "product_name": "Malloc disk", 00:09:49.344 "block_size": 512, 00:09:49.344 "num_blocks": 65536, 00:09:49.344 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:49.344 "assigned_rate_limits": { 00:09:49.344 "rw_ios_per_sec": 0, 00:09:49.344 "rw_mbytes_per_sec": 0, 00:09:49.344 "r_mbytes_per_sec": 0, 00:09:49.344 "w_mbytes_per_sec": 0 00:09:49.344 }, 00:09:49.344 "claimed": false, 00:09:49.344 "zoned": false, 00:09:49.344 "supported_io_types": { 00:09:49.344 "read": true, 00:09:49.344 "write": true, 00:09:49.344 "unmap": true, 00:09:49.344 "flush": true, 00:09:49.344 "reset": true, 00:09:49.344 "nvme_admin": false, 00:09:49.344 "nvme_io": false, 00:09:49.344 "nvme_io_md": false, 00:09:49.344 "write_zeroes": true, 00:09:49.344 "zcopy": true, 00:09:49.344 "get_zone_info": false, 00:09:49.344 "zone_management": false, 00:09:49.344 "zone_append": false, 00:09:49.344 "compare": false, 00:09:49.344 "compare_and_write": false, 00:09:49.344 "abort": true, 00:09:49.344 "seek_hole": false, 00:09:49.344 "seek_data": false, 
00:09:49.344 "copy": true, 00:09:49.344 "nvme_iov_md": false 00:09:49.344 }, 00:09:49.344 "memory_domains": [ 00:09:49.344 { 00:09:49.344 "dma_device_id": "system", 00:09:49.344 "dma_device_type": 1 00:09:49.344 }, 00:09:49.344 { 00:09:49.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.344 "dma_device_type": 2 00:09:49.344 } 00:09:49.344 ], 00:09:49.344 "driver_specific": {} 00:09:49.344 } 00:09:49.344 ] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 BaseBdev4 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.344 
04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 [ 00:09:49.344 { 00:09:49.344 "name": "BaseBdev4", 00:09:49.344 "aliases": [ 00:09:49.344 "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5" 00:09:49.344 ], 00:09:49.344 "product_name": "Malloc disk", 00:09:49.344 "block_size": 512, 00:09:49.344 "num_blocks": 65536, 00:09:49.344 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:49.344 "assigned_rate_limits": { 00:09:49.344 "rw_ios_per_sec": 0, 00:09:49.344 "rw_mbytes_per_sec": 0, 00:09:49.344 "r_mbytes_per_sec": 0, 00:09:49.344 "w_mbytes_per_sec": 0 00:09:49.344 }, 00:09:49.344 "claimed": false, 00:09:49.344 "zoned": false, 00:09:49.344 "supported_io_types": { 00:09:49.344 "read": true, 00:09:49.344 "write": true, 00:09:49.344 "unmap": true, 00:09:49.344 "flush": true, 00:09:49.344 "reset": true, 00:09:49.344 "nvme_admin": false, 00:09:49.344 "nvme_io": false, 00:09:49.344 "nvme_io_md": false, 00:09:49.344 "write_zeroes": true, 00:09:49.344 "zcopy": true, 00:09:49.344 "get_zone_info": false, 00:09:49.344 "zone_management": false, 00:09:49.344 "zone_append": false, 00:09:49.344 "compare": false, 00:09:49.344 "compare_and_write": false, 00:09:49.344 "abort": true, 00:09:49.344 "seek_hole": false, 00:09:49.344 "seek_data": false, 00:09:49.344 
"copy": true, 00:09:49.344 "nvme_iov_md": false 00:09:49.344 }, 00:09:49.344 "memory_domains": [ 00:09:49.344 { 00:09:49.344 "dma_device_id": "system", 00:09:49.344 "dma_device_type": 1 00:09:49.344 }, 00:09:49.344 { 00:09:49.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.344 "dma_device_type": 2 00:09:49.344 } 00:09:49.344 ], 00:09:49.344 "driver_specific": {} 00:09:49.344 } 00:09:49.344 ] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.344 [2024-12-13 04:25:49.258485] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.344 [2024-12-13 04:25:49.258587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.344 [2024-12-13 04:25:49.258652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.344 [2024-12-13 04:25:49.260774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.344 [2024-12-13 04:25:49.260863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.344 04:25:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.344 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.345 "name": "Existed_Raid", 00:09:49.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.345 "strip_size_kb": 64, 00:09:49.345 "state": "configuring", 00:09:49.345 
"raid_level": "raid0", 00:09:49.345 "superblock": false, 00:09:49.345 "num_base_bdevs": 4, 00:09:49.345 "num_base_bdevs_discovered": 3, 00:09:49.345 "num_base_bdevs_operational": 4, 00:09:49.345 "base_bdevs_list": [ 00:09:49.345 { 00:09:49.345 "name": "BaseBdev1", 00:09:49.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.345 "is_configured": false, 00:09:49.345 "data_offset": 0, 00:09:49.345 "data_size": 0 00:09:49.345 }, 00:09:49.345 { 00:09:49.345 "name": "BaseBdev2", 00:09:49.345 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:49.345 "is_configured": true, 00:09:49.345 "data_offset": 0, 00:09:49.345 "data_size": 65536 00:09:49.345 }, 00:09:49.345 { 00:09:49.345 "name": "BaseBdev3", 00:09:49.345 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:49.345 "is_configured": true, 00:09:49.345 "data_offset": 0, 00:09:49.345 "data_size": 65536 00:09:49.345 }, 00:09:49.345 { 00:09:49.345 "name": "BaseBdev4", 00:09:49.345 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:49.345 "is_configured": true, 00:09:49.345 "data_offset": 0, 00:09:49.345 "data_size": 65536 00:09:49.345 } 00:09:49.345 ] 00:09:49.345 }' 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.345 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.913 [2024-12-13 04:25:49.717667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.913 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.913 "name": "Existed_Raid", 00:09:49.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.913 "strip_size_kb": 64, 00:09:49.913 "state": "configuring", 00:09:49.913 "raid_level": "raid0", 00:09:49.913 "superblock": false, 00:09:49.913 
"num_base_bdevs": 4, 00:09:49.913 "num_base_bdevs_discovered": 2, 00:09:49.913 "num_base_bdevs_operational": 4, 00:09:49.913 "base_bdevs_list": [ 00:09:49.913 { 00:09:49.913 "name": "BaseBdev1", 00:09:49.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.913 "is_configured": false, 00:09:49.913 "data_offset": 0, 00:09:49.914 "data_size": 0 00:09:49.914 }, 00:09:49.914 { 00:09:49.914 "name": null, 00:09:49.914 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:49.914 "is_configured": false, 00:09:49.914 "data_offset": 0, 00:09:49.914 "data_size": 65536 00:09:49.914 }, 00:09:49.914 { 00:09:49.914 "name": "BaseBdev3", 00:09:49.914 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:49.914 "is_configured": true, 00:09:49.914 "data_offset": 0, 00:09:49.914 "data_size": 65536 00:09:49.914 }, 00:09:49.914 { 00:09:49.914 "name": "BaseBdev4", 00:09:49.914 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:49.914 "is_configured": true, 00:09:49.914 "data_offset": 0, 00:09:49.914 "data_size": 65536 00:09:49.914 } 00:09:49.914 ] 00:09:49.914 }' 00:09:49.914 04:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.914 04:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.172 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.172 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.172 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.172 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:50.173 04:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.173 [2024-12-13 04:25:50.181679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.173 BaseBdev1 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.173 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.431 [ 00:09:50.431 { 00:09:50.431 "name": "BaseBdev1", 00:09:50.431 "aliases": [ 00:09:50.431 "7f18f536-0caf-43a5-8db2-7d9628b04ab4" 00:09:50.431 ], 00:09:50.431 "product_name": "Malloc disk", 00:09:50.431 "block_size": 512, 00:09:50.431 "num_blocks": 65536, 00:09:50.431 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:50.431 "assigned_rate_limits": { 00:09:50.431 "rw_ios_per_sec": 0, 00:09:50.431 "rw_mbytes_per_sec": 0, 00:09:50.431 "r_mbytes_per_sec": 0, 00:09:50.431 "w_mbytes_per_sec": 0 00:09:50.431 }, 00:09:50.431 "claimed": true, 00:09:50.431 "claim_type": "exclusive_write", 00:09:50.431 "zoned": false, 00:09:50.431 "supported_io_types": { 00:09:50.431 "read": true, 00:09:50.431 "write": true, 00:09:50.431 "unmap": true, 00:09:50.431 "flush": true, 00:09:50.431 "reset": true, 00:09:50.431 "nvme_admin": false, 00:09:50.431 "nvme_io": false, 00:09:50.431 "nvme_io_md": false, 00:09:50.431 "write_zeroes": true, 00:09:50.431 "zcopy": true, 00:09:50.431 "get_zone_info": false, 00:09:50.431 "zone_management": false, 00:09:50.431 "zone_append": false, 00:09:50.431 "compare": false, 00:09:50.431 "compare_and_write": false, 00:09:50.431 "abort": true, 00:09:50.431 "seek_hole": false, 00:09:50.431 "seek_data": false, 00:09:50.431 "copy": true, 00:09:50.431 "nvme_iov_md": false 00:09:50.431 }, 00:09:50.431 "memory_domains": [ 00:09:50.431 { 00:09:50.431 "dma_device_id": "system", 00:09:50.431 "dma_device_type": 1 00:09:50.431 }, 00:09:50.431 { 00:09:50.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.431 "dma_device_type": 2 00:09:50.431 } 00:09:50.431 ], 00:09:50.431 "driver_specific": {} 00:09:50.431 } 00:09:50.431 ] 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.431 "name": "Existed_Raid", 00:09:50.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.431 "strip_size_kb": 64, 00:09:50.431 "state": "configuring", 00:09:50.431 "raid_level": "raid0", 00:09:50.431 "superblock": false, 
00:09:50.431 "num_base_bdevs": 4, 00:09:50.431 "num_base_bdevs_discovered": 3, 00:09:50.431 "num_base_bdevs_operational": 4, 00:09:50.431 "base_bdevs_list": [ 00:09:50.431 { 00:09:50.431 "name": "BaseBdev1", 00:09:50.431 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:50.431 "is_configured": true, 00:09:50.431 "data_offset": 0, 00:09:50.431 "data_size": 65536 00:09:50.431 }, 00:09:50.431 { 00:09:50.431 "name": null, 00:09:50.431 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:50.431 "is_configured": false, 00:09:50.431 "data_offset": 0, 00:09:50.431 "data_size": 65536 00:09:50.431 }, 00:09:50.431 { 00:09:50.431 "name": "BaseBdev3", 00:09:50.431 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:50.431 "is_configured": true, 00:09:50.431 "data_offset": 0, 00:09:50.431 "data_size": 65536 00:09:50.431 }, 00:09:50.431 { 00:09:50.431 "name": "BaseBdev4", 00:09:50.431 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:50.431 "is_configured": true, 00:09:50.431 "data_offset": 0, 00:09:50.431 "data_size": 65536 00:09:50.431 } 00:09:50.431 ] 00:09:50.431 }' 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.431 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.689 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.690 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.690 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.690 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.690 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:50.948 04:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.948 [2024-12-13 04:25:50.716802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.948 "name": "Existed_Raid", 00:09:50.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.948 "strip_size_kb": 64, 00:09:50.948 "state": "configuring", 00:09:50.948 "raid_level": "raid0", 00:09:50.948 "superblock": false, 00:09:50.948 "num_base_bdevs": 4, 00:09:50.948 "num_base_bdevs_discovered": 2, 00:09:50.948 "num_base_bdevs_operational": 4, 00:09:50.948 "base_bdevs_list": [ 00:09:50.948 { 00:09:50.948 "name": "BaseBdev1", 00:09:50.948 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:50.948 "is_configured": true, 00:09:50.948 "data_offset": 0, 00:09:50.948 "data_size": 65536 00:09:50.948 }, 00:09:50.948 { 00:09:50.948 "name": null, 00:09:50.948 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:50.948 "is_configured": false, 00:09:50.948 "data_offset": 0, 00:09:50.948 "data_size": 65536 00:09:50.948 }, 00:09:50.948 { 00:09:50.948 "name": null, 00:09:50.948 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:50.948 "is_configured": false, 00:09:50.948 "data_offset": 0, 00:09:50.948 "data_size": 65536 00:09:50.948 }, 00:09:50.948 { 00:09:50.948 "name": "BaseBdev4", 00:09:50.948 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:50.948 "is_configured": true, 00:09:50.948 "data_offset": 0, 00:09:50.948 "data_size": 65536 00:09:50.948 } 00:09:50.948 ] 00:09:50.948 }' 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.948 04:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.208 [2024-12-13 04:25:51.216035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.208 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.468 "name": "Existed_Raid", 00:09:51.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.468 "strip_size_kb": 64, 00:09:51.468 "state": "configuring", 00:09:51.468 "raid_level": "raid0", 00:09:51.468 "superblock": false, 00:09:51.468 "num_base_bdevs": 4, 00:09:51.468 "num_base_bdevs_discovered": 3, 00:09:51.468 "num_base_bdevs_operational": 4, 00:09:51.468 "base_bdevs_list": [ 00:09:51.468 { 00:09:51.468 "name": "BaseBdev1", 00:09:51.468 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:51.468 "is_configured": true, 00:09:51.468 "data_offset": 0, 00:09:51.468 "data_size": 65536 00:09:51.468 }, 00:09:51.468 { 00:09:51.468 "name": null, 00:09:51.468 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:51.468 "is_configured": false, 00:09:51.468 "data_offset": 0, 00:09:51.468 "data_size": 65536 00:09:51.468 }, 00:09:51.468 { 00:09:51.468 "name": "BaseBdev3", 00:09:51.468 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:51.468 "is_configured": 
true, 00:09:51.468 "data_offset": 0, 00:09:51.468 "data_size": 65536 00:09:51.468 }, 00:09:51.468 { 00:09:51.468 "name": "BaseBdev4", 00:09:51.468 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:51.468 "is_configured": true, 00:09:51.468 "data_offset": 0, 00:09:51.468 "data_size": 65536 00:09:51.468 } 00:09:51.468 ] 00:09:51.468 }' 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.468 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.727 [2024-12-13 04:25:51.711252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.727 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.728 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.728 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.728 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.986 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.986 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.986 "name": "Existed_Raid", 00:09:51.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.986 "strip_size_kb": 64, 00:09:51.986 "state": "configuring", 00:09:51.986 "raid_level": "raid0", 00:09:51.986 "superblock": false, 00:09:51.986 "num_base_bdevs": 4, 00:09:51.986 "num_base_bdevs_discovered": 2, 00:09:51.986 "num_base_bdevs_operational": 4, 00:09:51.986 
"base_bdevs_list": [ 00:09:51.986 { 00:09:51.986 "name": null, 00:09:51.986 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:51.986 "is_configured": false, 00:09:51.986 "data_offset": 0, 00:09:51.986 "data_size": 65536 00:09:51.986 }, 00:09:51.986 { 00:09:51.986 "name": null, 00:09:51.986 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:51.986 "is_configured": false, 00:09:51.986 "data_offset": 0, 00:09:51.986 "data_size": 65536 00:09:51.986 }, 00:09:51.986 { 00:09:51.986 "name": "BaseBdev3", 00:09:51.986 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:51.986 "is_configured": true, 00:09:51.986 "data_offset": 0, 00:09:51.986 "data_size": 65536 00:09:51.986 }, 00:09:51.986 { 00:09:51.986 "name": "BaseBdev4", 00:09:51.986 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:51.986 "is_configured": true, 00:09:51.986 "data_offset": 0, 00:09:51.986 "data_size": 65536 00:09:51.986 } 00:09:51.986 ] 00:09:51.987 }' 00:09:51.987 04:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.987 04:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:52.246 04:25:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.246 [2024-12-13 04:25:52.166371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.246 "name": "Existed_Raid", 00:09:52.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.246 "strip_size_kb": 64, 00:09:52.246 "state": "configuring", 00:09:52.246 "raid_level": "raid0", 00:09:52.246 "superblock": false, 00:09:52.246 "num_base_bdevs": 4, 00:09:52.246 "num_base_bdevs_discovered": 3, 00:09:52.246 "num_base_bdevs_operational": 4, 00:09:52.246 "base_bdevs_list": [ 00:09:52.246 { 00:09:52.246 "name": null, 00:09:52.246 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:52.246 "is_configured": false, 00:09:52.246 "data_offset": 0, 00:09:52.246 "data_size": 65536 00:09:52.246 }, 00:09:52.246 { 00:09:52.246 "name": "BaseBdev2", 00:09:52.246 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:52.246 "is_configured": true, 00:09:52.246 "data_offset": 0, 00:09:52.246 "data_size": 65536 00:09:52.246 }, 00:09:52.246 { 00:09:52.246 "name": "BaseBdev3", 00:09:52.246 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:52.246 "is_configured": true, 00:09:52.246 "data_offset": 0, 00:09:52.246 "data_size": 65536 00:09:52.246 }, 00:09:52.246 { 00:09:52.246 "name": "BaseBdev4", 00:09:52.246 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:52.246 "is_configured": true, 00:09:52.246 "data_offset": 0, 00:09:52.246 "data_size": 65536 00:09:52.246 } 00:09:52.246 ] 00:09:52.246 }' 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.246 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:52.814 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f18f536-0caf-43a5-8db2-7d9628b04ab4 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.815 [2024-12-13 04:25:52.658213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:52.815 [2024-12-13 04:25:52.658258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:52.815 [2024-12-13 04:25:52.658266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:52.815 [2024-12-13 04:25:52.658556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:52.815 [2024-12-13 04:25:52.658684] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:52.815 [2024-12-13 04:25:52.658696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:52.815 [2024-12-13 04:25:52.658899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.815 NewBaseBdev 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.815 [ 00:09:52.815 { 
00:09:52.815 "name": "NewBaseBdev", 00:09:52.815 "aliases": [ 00:09:52.815 "7f18f536-0caf-43a5-8db2-7d9628b04ab4" 00:09:52.815 ], 00:09:52.815 "product_name": "Malloc disk", 00:09:52.815 "block_size": 512, 00:09:52.815 "num_blocks": 65536, 00:09:52.815 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:52.815 "assigned_rate_limits": { 00:09:52.815 "rw_ios_per_sec": 0, 00:09:52.815 "rw_mbytes_per_sec": 0, 00:09:52.815 "r_mbytes_per_sec": 0, 00:09:52.815 "w_mbytes_per_sec": 0 00:09:52.815 }, 00:09:52.815 "claimed": true, 00:09:52.815 "claim_type": "exclusive_write", 00:09:52.815 "zoned": false, 00:09:52.815 "supported_io_types": { 00:09:52.815 "read": true, 00:09:52.815 "write": true, 00:09:52.815 "unmap": true, 00:09:52.815 "flush": true, 00:09:52.815 "reset": true, 00:09:52.815 "nvme_admin": false, 00:09:52.815 "nvme_io": false, 00:09:52.815 "nvme_io_md": false, 00:09:52.815 "write_zeroes": true, 00:09:52.815 "zcopy": true, 00:09:52.815 "get_zone_info": false, 00:09:52.815 "zone_management": false, 00:09:52.815 "zone_append": false, 00:09:52.815 "compare": false, 00:09:52.815 "compare_and_write": false, 00:09:52.815 "abort": true, 00:09:52.815 "seek_hole": false, 00:09:52.815 "seek_data": false, 00:09:52.815 "copy": true, 00:09:52.815 "nvme_iov_md": false 00:09:52.815 }, 00:09:52.815 "memory_domains": [ 00:09:52.815 { 00:09:52.815 "dma_device_id": "system", 00:09:52.815 "dma_device_type": 1 00:09:52.815 }, 00:09:52.815 { 00:09:52.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.815 "dma_device_type": 2 00:09:52.815 } 00:09:52.815 ], 00:09:52.815 "driver_specific": {} 00:09:52.815 } 00:09:52.815 ] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:52.815 
04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.815 "name": "Existed_Raid", 00:09:52.815 "uuid": "0210cf7c-7317-4ef1-a8e5-2bc6206287af", 00:09:52.815 "strip_size_kb": 64, 00:09:52.815 "state": "online", 00:09:52.815 "raid_level": "raid0", 00:09:52.815 "superblock": false, 00:09:52.815 "num_base_bdevs": 4, 00:09:52.815 "num_base_bdevs_discovered": 4, 00:09:52.815 
"num_base_bdevs_operational": 4, 00:09:52.815 "base_bdevs_list": [ 00:09:52.815 { 00:09:52.815 "name": "NewBaseBdev", 00:09:52.815 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:52.815 "is_configured": true, 00:09:52.815 "data_offset": 0, 00:09:52.815 "data_size": 65536 00:09:52.815 }, 00:09:52.815 { 00:09:52.815 "name": "BaseBdev2", 00:09:52.815 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:52.815 "is_configured": true, 00:09:52.815 "data_offset": 0, 00:09:52.815 "data_size": 65536 00:09:52.815 }, 00:09:52.815 { 00:09:52.815 "name": "BaseBdev3", 00:09:52.815 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:52.815 "is_configured": true, 00:09:52.815 "data_offset": 0, 00:09:52.815 "data_size": 65536 00:09:52.815 }, 00:09:52.815 { 00:09:52.815 "name": "BaseBdev4", 00:09:52.815 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:52.815 "is_configured": true, 00:09:52.815 "data_offset": 0, 00:09:52.815 "data_size": 65536 00:09:52.815 } 00:09:52.815 ] 00:09:52.815 }' 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.815 04:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.384 [2024-12-13 04:25:53.137843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.384 "name": "Existed_Raid", 00:09:53.384 "aliases": [ 00:09:53.384 "0210cf7c-7317-4ef1-a8e5-2bc6206287af" 00:09:53.384 ], 00:09:53.384 "product_name": "Raid Volume", 00:09:53.384 "block_size": 512, 00:09:53.384 "num_blocks": 262144, 00:09:53.384 "uuid": "0210cf7c-7317-4ef1-a8e5-2bc6206287af", 00:09:53.384 "assigned_rate_limits": { 00:09:53.384 "rw_ios_per_sec": 0, 00:09:53.384 "rw_mbytes_per_sec": 0, 00:09:53.384 "r_mbytes_per_sec": 0, 00:09:53.384 "w_mbytes_per_sec": 0 00:09:53.384 }, 00:09:53.384 "claimed": false, 00:09:53.384 "zoned": false, 00:09:53.384 "supported_io_types": { 00:09:53.384 "read": true, 00:09:53.384 "write": true, 00:09:53.384 "unmap": true, 00:09:53.384 "flush": true, 00:09:53.384 "reset": true, 00:09:53.384 "nvme_admin": false, 00:09:53.384 "nvme_io": false, 00:09:53.384 "nvme_io_md": false, 00:09:53.384 "write_zeroes": true, 00:09:53.384 "zcopy": false, 00:09:53.384 "get_zone_info": false, 00:09:53.384 "zone_management": false, 00:09:53.384 "zone_append": false, 00:09:53.384 "compare": false, 00:09:53.384 "compare_and_write": false, 00:09:53.384 "abort": false, 00:09:53.384 "seek_hole": false, 00:09:53.384 "seek_data": false, 00:09:53.384 "copy": false, 00:09:53.384 "nvme_iov_md": false 00:09:53.384 }, 00:09:53.384 "memory_domains": [ 00:09:53.384 { 00:09:53.384 "dma_device_id": "system", 
00:09:53.384 "dma_device_type": 1 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.384 "dma_device_type": 2 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "system", 00:09:53.384 "dma_device_type": 1 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.384 "dma_device_type": 2 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "system", 00:09:53.384 "dma_device_type": 1 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.384 "dma_device_type": 2 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "system", 00:09:53.384 "dma_device_type": 1 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.384 "dma_device_type": 2 00:09:53.384 } 00:09:53.384 ], 00:09:53.384 "driver_specific": { 00:09:53.384 "raid": { 00:09:53.384 "uuid": "0210cf7c-7317-4ef1-a8e5-2bc6206287af", 00:09:53.384 "strip_size_kb": 64, 00:09:53.384 "state": "online", 00:09:53.384 "raid_level": "raid0", 00:09:53.384 "superblock": false, 00:09:53.384 "num_base_bdevs": 4, 00:09:53.384 "num_base_bdevs_discovered": 4, 00:09:53.384 "num_base_bdevs_operational": 4, 00:09:53.384 "base_bdevs_list": [ 00:09:53.384 { 00:09:53.384 "name": "NewBaseBdev", 00:09:53.384 "uuid": "7f18f536-0caf-43a5-8db2-7d9628b04ab4", 00:09:53.384 "is_configured": true, 00:09:53.384 "data_offset": 0, 00:09:53.384 "data_size": 65536 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "name": "BaseBdev2", 00:09:53.384 "uuid": "8976553e-3ace-4efa-b8b3-5ee3d597aba6", 00:09:53.384 "is_configured": true, 00:09:53.384 "data_offset": 0, 00:09:53.384 "data_size": 65536 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "name": "BaseBdev3", 00:09:53.384 "uuid": "28f32da7-7360-4493-be7b-14e846d58bc9", 00:09:53.384 "is_configured": true, 00:09:53.384 "data_offset": 0, 00:09:53.384 "data_size": 65536 00:09:53.384 }, 00:09:53.384 { 00:09:53.384 "name": "BaseBdev4", 
00:09:53.384 "uuid": "bd39784f-e29e-4e1f-919a-6d3a6fc9d6c5", 00:09:53.384 "is_configured": true, 00:09:53.384 "data_offset": 0, 00:09:53.384 "data_size": 65536 00:09:53.384 } 00:09:53.384 ] 00:09:53.384 } 00:09:53.384 } 00:09:53.384 }' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:53.384 BaseBdev2 00:09:53.384 BaseBdev3 00:09:53.384 BaseBdev4' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.384 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.385 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.643 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.643 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.644 [2024-12-13 04:25:53.409043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.644 [2024-12-13 04:25:53.409115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.644 [2024-12-13 04:25:53.409217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.644 [2024-12-13 04:25:53.409288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.644 [2024-12-13 04:25:53.409298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82032 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82032 
']' 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82032 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82032 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.644 killing process with pid 82032 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82032' 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 82032 00:09:53.644 [2024-12-13 04:25:53.461168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.644 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 82032 00:09:53.644 [2024-12-13 04:25:53.539042] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.903 04:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.903 00:09:53.903 real 0m9.583s 00:09:53.903 user 0m16.112s 00:09:53.903 sys 0m2.057s 00:09:53.903 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.903 04:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.903 ************************************ 00:09:53.903 END TEST raid_state_function_test 00:09:53.903 ************************************ 00:09:53.903 04:25:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:09:53.903 
04:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.162 04:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.162 04:25:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.162 ************************************ 00:09:54.162 START TEST raid_state_function_test_sb 00:09:54.162 ************************************ 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82687 00:09:54.162 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82687' 00:09:54.163 Process raid pid: 82687 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82687 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82687 ']' 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.163 04:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 [2024-12-13 04:25:54.028714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:54.163 [2024-12-13 04:25:54.028912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.422 [2024-12-13 04:25:54.184592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.422 [2024-12-13 04:25:54.222959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.422 [2024-12-13 04:25:54.298691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.422 [2024-12-13 04:25:54.298739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.990 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.990 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.991 [2024-12-13 04:25:54.856639] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.991 [2024-12-13 04:25:54.856701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.991 [2024-12-13 04:25:54.856711] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.991 [2024-12-13 04:25:54.856721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.991 [2024-12-13 04:25:54.856727] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:54.991 [2024-12-13 04:25:54.856741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.991 [2024-12-13 04:25:54.856746] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.991 [2024-12-13 04:25:54.856756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.991 04:25:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.991 "name": "Existed_Raid", 00:09:54.991 "uuid": "74a0ade1-4460-4e33-878b-7528674f3d97", 00:09:54.991 "strip_size_kb": 64, 00:09:54.991 "state": "configuring", 00:09:54.991 "raid_level": "raid0", 00:09:54.991 "superblock": true, 00:09:54.991 "num_base_bdevs": 4, 00:09:54.991 "num_base_bdevs_discovered": 0, 00:09:54.991 "num_base_bdevs_operational": 4, 00:09:54.991 "base_bdevs_list": [ 00:09:54.991 { 00:09:54.991 "name": "BaseBdev1", 00:09:54.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.991 "is_configured": false, 00:09:54.991 "data_offset": 0, 00:09:54.991 "data_size": 0 00:09:54.991 }, 00:09:54.991 { 00:09:54.991 "name": "BaseBdev2", 00:09:54.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.991 "is_configured": false, 00:09:54.991 "data_offset": 0, 00:09:54.991 "data_size": 0 00:09:54.991 }, 00:09:54.991 { 00:09:54.991 "name": "BaseBdev3", 00:09:54.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.991 "is_configured": false, 00:09:54.991 "data_offset": 0, 00:09:54.991 "data_size": 0 00:09:54.991 }, 00:09:54.991 { 00:09:54.991 "name": "BaseBdev4", 00:09:54.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.991 "is_configured": false, 00:09:54.991 "data_offset": 0, 00:09:54.991 "data_size": 0 00:09:54.991 } 00:09:54.991 ] 00:09:54.991 }' 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.991 04:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 [2024-12-13 04:25:55.323724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.566 [2024-12-13 04:25:55.323819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 [2024-12-13 04:25:55.335707] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.566 [2024-12-13 04:25:55.335785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.566 [2024-12-13 04:25:55.335812] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.566 [2024-12-13 04:25:55.335834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.566 [2024-12-13 04:25:55.335851] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.566 [2024-12-13 04:25:55.335871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.566 [2024-12-13 04:25:55.335888] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:55.566 [2024-12-13 04:25:55.335924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 [2024-12-13 04:25:55.362745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.566 BaseBdev1 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 [ 00:09:55.566 { 00:09:55.566 "name": "BaseBdev1", 00:09:55.566 "aliases": [ 00:09:55.566 "a94a748a-541c-4652-8397-0ae904dbb536" 00:09:55.566 ], 00:09:55.566 "product_name": "Malloc disk", 00:09:55.566 "block_size": 512, 00:09:55.566 "num_blocks": 65536, 00:09:55.566 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:55.566 "assigned_rate_limits": { 00:09:55.566 "rw_ios_per_sec": 0, 00:09:55.566 "rw_mbytes_per_sec": 0, 00:09:55.566 "r_mbytes_per_sec": 0, 00:09:55.566 "w_mbytes_per_sec": 0 00:09:55.566 }, 00:09:55.566 "claimed": true, 00:09:55.566 "claim_type": "exclusive_write", 00:09:55.566 "zoned": false, 00:09:55.566 "supported_io_types": { 00:09:55.566 "read": true, 00:09:55.566 "write": true, 00:09:55.566 "unmap": true, 00:09:55.566 "flush": true, 00:09:55.566 "reset": true, 00:09:55.566 "nvme_admin": false, 00:09:55.566 "nvme_io": false, 00:09:55.566 "nvme_io_md": false, 00:09:55.566 "write_zeroes": true, 00:09:55.566 "zcopy": true, 00:09:55.566 "get_zone_info": false, 00:09:55.566 "zone_management": false, 00:09:55.566 "zone_append": false, 00:09:55.566 "compare": false, 00:09:55.566 "compare_and_write": false, 00:09:55.566 "abort": true, 00:09:55.566 "seek_hole": false, 00:09:55.566 "seek_data": false, 00:09:55.566 "copy": true, 00:09:55.566 "nvme_iov_md": false 00:09:55.566 }, 00:09:55.566 "memory_domains": [ 00:09:55.566 { 00:09:55.566 "dma_device_id": "system", 00:09:55.566 "dma_device_type": 1 00:09:55.566 }, 00:09:55.566 { 00:09:55.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.566 "dma_device_type": 2 00:09:55.566 } 00:09:55.566 ], 00:09:55.566 "driver_specific": {} 
00:09:55.566 } 00:09:55.566 ] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.566 "name": "Existed_Raid", 00:09:55.566 "uuid": "9074b62d-5760-41c3-a81c-af253ca768b7", 00:09:55.566 "strip_size_kb": 64, 00:09:55.566 "state": "configuring", 00:09:55.566 "raid_level": "raid0", 00:09:55.566 "superblock": true, 00:09:55.566 "num_base_bdevs": 4, 00:09:55.566 "num_base_bdevs_discovered": 1, 00:09:55.566 "num_base_bdevs_operational": 4, 00:09:55.566 "base_bdevs_list": [ 00:09:55.566 { 00:09:55.566 "name": "BaseBdev1", 00:09:55.566 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:55.566 "is_configured": true, 00:09:55.566 "data_offset": 2048, 00:09:55.566 "data_size": 63488 00:09:55.566 }, 00:09:55.566 { 00:09:55.566 "name": "BaseBdev2", 00:09:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.566 "is_configured": false, 00:09:55.566 "data_offset": 0, 00:09:55.566 "data_size": 0 00:09:55.566 }, 00:09:55.566 { 00:09:55.566 "name": "BaseBdev3", 00:09:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.566 "is_configured": false, 00:09:55.566 "data_offset": 0, 00:09:55.566 "data_size": 0 00:09:55.566 }, 00:09:55.566 { 00:09:55.566 "name": "BaseBdev4", 00:09:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.566 "is_configured": false, 00:09:55.566 "data_offset": 0, 00:09:55.566 "data_size": 0 00:09:55.566 } 00:09:55.566 ] 00:09:55.566 }' 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.566 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.826 [2024-12-13 04:25:55.809983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.826 [2024-12-13 04:25:55.810023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.826 [2024-12-13 04:25:55.822003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.826 [2024-12-13 04:25:55.824153] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.826 [2024-12-13 04:25:55.824191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.826 [2024-12-13 04:25:55.824200] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.826 [2024-12-13 04:25:55.824208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.826 [2024-12-13 04:25:55.824214] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:55.826 [2024-12-13 04:25:55.824221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.826 04:25:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.826 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.086 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.086 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.086 "name": 
"Existed_Raid", 00:09:56.086 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:56.086 "strip_size_kb": 64, 00:09:56.086 "state": "configuring", 00:09:56.086 "raid_level": "raid0", 00:09:56.086 "superblock": true, 00:09:56.086 "num_base_bdevs": 4, 00:09:56.086 "num_base_bdevs_discovered": 1, 00:09:56.086 "num_base_bdevs_operational": 4, 00:09:56.086 "base_bdevs_list": [ 00:09:56.086 { 00:09:56.086 "name": "BaseBdev1", 00:09:56.086 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:56.086 "is_configured": true, 00:09:56.086 "data_offset": 2048, 00:09:56.086 "data_size": 63488 00:09:56.086 }, 00:09:56.086 { 00:09:56.086 "name": "BaseBdev2", 00:09:56.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.086 "is_configured": false, 00:09:56.086 "data_offset": 0, 00:09:56.086 "data_size": 0 00:09:56.086 }, 00:09:56.086 { 00:09:56.086 "name": "BaseBdev3", 00:09:56.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.086 "is_configured": false, 00:09:56.086 "data_offset": 0, 00:09:56.086 "data_size": 0 00:09:56.086 }, 00:09:56.086 { 00:09:56.086 "name": "BaseBdev4", 00:09:56.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.086 "is_configured": false, 00:09:56.086 "data_offset": 0, 00:09:56.086 "data_size": 0 00:09:56.086 } 00:09:56.086 ] 00:09:56.086 }' 00:09:56.086 04:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.086 04:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.345 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.345 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.345 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.345 [2024-12-13 04:25:56.293922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:56.345 BaseBdev2 00:09:56.345 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.345 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.345 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.346 [ 00:09:56.346 { 00:09:56.346 "name": "BaseBdev2", 00:09:56.346 "aliases": [ 00:09:56.346 "78e22b15-07c8-4561-8023-c642fdeb95a1" 00:09:56.346 ], 00:09:56.346 "product_name": "Malloc disk", 00:09:56.346 "block_size": 512, 00:09:56.346 "num_blocks": 65536, 00:09:56.346 "uuid": "78e22b15-07c8-4561-8023-c642fdeb95a1", 00:09:56.346 
"assigned_rate_limits": { 00:09:56.346 "rw_ios_per_sec": 0, 00:09:56.346 "rw_mbytes_per_sec": 0, 00:09:56.346 "r_mbytes_per_sec": 0, 00:09:56.346 "w_mbytes_per_sec": 0 00:09:56.346 }, 00:09:56.346 "claimed": true, 00:09:56.346 "claim_type": "exclusive_write", 00:09:56.346 "zoned": false, 00:09:56.346 "supported_io_types": { 00:09:56.346 "read": true, 00:09:56.346 "write": true, 00:09:56.346 "unmap": true, 00:09:56.346 "flush": true, 00:09:56.346 "reset": true, 00:09:56.346 "nvme_admin": false, 00:09:56.346 "nvme_io": false, 00:09:56.346 "nvme_io_md": false, 00:09:56.346 "write_zeroes": true, 00:09:56.346 "zcopy": true, 00:09:56.346 "get_zone_info": false, 00:09:56.346 "zone_management": false, 00:09:56.346 "zone_append": false, 00:09:56.346 "compare": false, 00:09:56.346 "compare_and_write": false, 00:09:56.346 "abort": true, 00:09:56.346 "seek_hole": false, 00:09:56.346 "seek_data": false, 00:09:56.346 "copy": true, 00:09:56.346 "nvme_iov_md": false 00:09:56.346 }, 00:09:56.346 "memory_domains": [ 00:09:56.346 { 00:09:56.346 "dma_device_id": "system", 00:09:56.346 "dma_device_type": 1 00:09:56.346 }, 00:09:56.346 { 00:09:56.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.346 "dma_device_type": 2 00:09:56.346 } 00:09:56.346 ], 00:09:56.346 "driver_specific": {} 00:09:56.346 } 00:09:56.346 ] 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.346 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.605 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.605 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.605 "name": "Existed_Raid", 00:09:56.605 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:56.605 "strip_size_kb": 64, 00:09:56.605 "state": "configuring", 00:09:56.605 "raid_level": "raid0", 00:09:56.605 "superblock": true, 00:09:56.605 "num_base_bdevs": 4, 00:09:56.605 "num_base_bdevs_discovered": 2, 00:09:56.605 "num_base_bdevs_operational": 4, 
00:09:56.605 "base_bdevs_list": [ 00:09:56.605 { 00:09:56.605 "name": "BaseBdev1", 00:09:56.605 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:56.605 "is_configured": true, 00:09:56.605 "data_offset": 2048, 00:09:56.605 "data_size": 63488 00:09:56.605 }, 00:09:56.605 { 00:09:56.605 "name": "BaseBdev2", 00:09:56.605 "uuid": "78e22b15-07c8-4561-8023-c642fdeb95a1", 00:09:56.605 "is_configured": true, 00:09:56.605 "data_offset": 2048, 00:09:56.605 "data_size": 63488 00:09:56.606 }, 00:09:56.606 { 00:09:56.606 "name": "BaseBdev3", 00:09:56.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.606 "is_configured": false, 00:09:56.606 "data_offset": 0, 00:09:56.606 "data_size": 0 00:09:56.606 }, 00:09:56.606 { 00:09:56.606 "name": "BaseBdev4", 00:09:56.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.606 "is_configured": false, 00:09:56.606 "data_offset": 0, 00:09:56.606 "data_size": 0 00:09:56.606 } 00:09:56.606 ] 00:09:56.606 }' 00:09:56.606 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.606 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.865 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.865 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.865 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.866 [2024-12-13 04:25:56.748959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.866 BaseBdev3 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.866 [ 00:09:56.866 { 00:09:56.866 "name": "BaseBdev3", 00:09:56.866 "aliases": [ 00:09:56.866 "517f2368-8fba-445f-8706-983037df8e29" 00:09:56.866 ], 00:09:56.866 "product_name": "Malloc disk", 00:09:56.866 "block_size": 512, 00:09:56.866 "num_blocks": 65536, 00:09:56.866 "uuid": "517f2368-8fba-445f-8706-983037df8e29", 00:09:56.866 "assigned_rate_limits": { 00:09:56.866 "rw_ios_per_sec": 0, 00:09:56.866 "rw_mbytes_per_sec": 0, 00:09:56.866 "r_mbytes_per_sec": 0, 00:09:56.866 "w_mbytes_per_sec": 0 00:09:56.866 }, 00:09:56.866 "claimed": true, 00:09:56.866 "claim_type": "exclusive_write", 00:09:56.866 "zoned": false, 00:09:56.866 "supported_io_types": { 00:09:56.866 "read": true, 00:09:56.866 
"write": true, 00:09:56.866 "unmap": true, 00:09:56.866 "flush": true, 00:09:56.866 "reset": true, 00:09:56.866 "nvme_admin": false, 00:09:56.866 "nvme_io": false, 00:09:56.866 "nvme_io_md": false, 00:09:56.866 "write_zeroes": true, 00:09:56.866 "zcopy": true, 00:09:56.866 "get_zone_info": false, 00:09:56.866 "zone_management": false, 00:09:56.866 "zone_append": false, 00:09:56.866 "compare": false, 00:09:56.866 "compare_and_write": false, 00:09:56.866 "abort": true, 00:09:56.866 "seek_hole": false, 00:09:56.866 "seek_data": false, 00:09:56.866 "copy": true, 00:09:56.866 "nvme_iov_md": false 00:09:56.866 }, 00:09:56.866 "memory_domains": [ 00:09:56.866 { 00:09:56.866 "dma_device_id": "system", 00:09:56.866 "dma_device_type": 1 00:09:56.866 }, 00:09:56.866 { 00:09:56.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.866 "dma_device_type": 2 00:09:56.866 } 00:09:56.866 ], 00:09:56.866 "driver_specific": {} 00:09:56.866 } 00:09:56.866 ] 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.866 "name": "Existed_Raid", 00:09:56.866 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:56.866 "strip_size_kb": 64, 00:09:56.866 "state": "configuring", 00:09:56.866 "raid_level": "raid0", 00:09:56.866 "superblock": true, 00:09:56.866 "num_base_bdevs": 4, 00:09:56.866 "num_base_bdevs_discovered": 3, 00:09:56.866 "num_base_bdevs_operational": 4, 00:09:56.866 "base_bdevs_list": [ 00:09:56.866 { 00:09:56.866 "name": "BaseBdev1", 00:09:56.866 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:56.866 "is_configured": true, 00:09:56.866 "data_offset": 2048, 00:09:56.866 "data_size": 63488 00:09:56.866 }, 00:09:56.866 { 00:09:56.866 "name": "BaseBdev2", 00:09:56.866 "uuid": 
"78e22b15-07c8-4561-8023-c642fdeb95a1", 00:09:56.866 "is_configured": true, 00:09:56.866 "data_offset": 2048, 00:09:56.866 "data_size": 63488 00:09:56.866 }, 00:09:56.866 { 00:09:56.866 "name": "BaseBdev3", 00:09:56.866 "uuid": "517f2368-8fba-445f-8706-983037df8e29", 00:09:56.866 "is_configured": true, 00:09:56.866 "data_offset": 2048, 00:09:56.866 "data_size": 63488 00:09:56.866 }, 00:09:56.866 { 00:09:56.866 "name": "BaseBdev4", 00:09:56.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.866 "is_configured": false, 00:09:56.866 "data_offset": 0, 00:09:56.866 "data_size": 0 00:09:56.866 } 00:09:56.866 ] 00:09:56.866 }' 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.866 04:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.435 [2024-12-13 04:25:57.256660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.435 [2024-12-13 04:25:57.256978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:57.435 [2024-12-13 04:25:57.256998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:57.435 [2024-12-13 04:25:57.257324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:57.435 BaseBdev4 00:09:57.435 [2024-12-13 04:25:57.257507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:57.435 [2024-12-13 04:25:57.257527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:57.435 [2024-12-13 04:25:57.257653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.435 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.435 [ 00:09:57.435 { 00:09:57.435 "name": "BaseBdev4", 00:09:57.435 "aliases": [ 00:09:57.435 "db99eaac-9178-4eb2-90f5-413425c3efa2" 00:09:57.435 ], 00:09:57.435 "product_name": "Malloc disk", 00:09:57.435 "block_size": 512, 00:09:57.435 
"num_blocks": 65536, 00:09:57.435 "uuid": "db99eaac-9178-4eb2-90f5-413425c3efa2", 00:09:57.435 "assigned_rate_limits": { 00:09:57.435 "rw_ios_per_sec": 0, 00:09:57.435 "rw_mbytes_per_sec": 0, 00:09:57.435 "r_mbytes_per_sec": 0, 00:09:57.435 "w_mbytes_per_sec": 0 00:09:57.435 }, 00:09:57.435 "claimed": true, 00:09:57.435 "claim_type": "exclusive_write", 00:09:57.435 "zoned": false, 00:09:57.435 "supported_io_types": { 00:09:57.435 "read": true, 00:09:57.435 "write": true, 00:09:57.435 "unmap": true, 00:09:57.435 "flush": true, 00:09:57.435 "reset": true, 00:09:57.435 "nvme_admin": false, 00:09:57.435 "nvme_io": false, 00:09:57.435 "nvme_io_md": false, 00:09:57.435 "write_zeroes": true, 00:09:57.436 "zcopy": true, 00:09:57.436 "get_zone_info": false, 00:09:57.436 "zone_management": false, 00:09:57.436 "zone_append": false, 00:09:57.436 "compare": false, 00:09:57.436 "compare_and_write": false, 00:09:57.436 "abort": true, 00:09:57.436 "seek_hole": false, 00:09:57.436 "seek_data": false, 00:09:57.436 "copy": true, 00:09:57.436 "nvme_iov_md": false 00:09:57.436 }, 00:09:57.436 "memory_domains": [ 00:09:57.436 { 00:09:57.436 "dma_device_id": "system", 00:09:57.436 "dma_device_type": 1 00:09:57.436 }, 00:09:57.436 { 00:09:57.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.436 "dma_device_type": 2 00:09:57.436 } 00:09:57.436 ], 00:09:57.436 "driver_specific": {} 00:09:57.436 } 00:09:57.436 ] 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.436 "name": "Existed_Raid", 00:09:57.436 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:57.436 "strip_size_kb": 64, 00:09:57.436 "state": "online", 00:09:57.436 "raid_level": "raid0", 00:09:57.436 "superblock": true, 00:09:57.436 "num_base_bdevs": 4, 
00:09:57.436 "num_base_bdevs_discovered": 4, 00:09:57.436 "num_base_bdevs_operational": 4, 00:09:57.436 "base_bdevs_list": [ 00:09:57.436 { 00:09:57.436 "name": "BaseBdev1", 00:09:57.436 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:57.436 "is_configured": true, 00:09:57.436 "data_offset": 2048, 00:09:57.436 "data_size": 63488 00:09:57.436 }, 00:09:57.436 { 00:09:57.436 "name": "BaseBdev2", 00:09:57.436 "uuid": "78e22b15-07c8-4561-8023-c642fdeb95a1", 00:09:57.436 "is_configured": true, 00:09:57.436 "data_offset": 2048, 00:09:57.436 "data_size": 63488 00:09:57.436 }, 00:09:57.436 { 00:09:57.436 "name": "BaseBdev3", 00:09:57.436 "uuid": "517f2368-8fba-445f-8706-983037df8e29", 00:09:57.436 "is_configured": true, 00:09:57.436 "data_offset": 2048, 00:09:57.436 "data_size": 63488 00:09:57.436 }, 00:09:57.436 { 00:09:57.436 "name": "BaseBdev4", 00:09:57.436 "uuid": "db99eaac-9178-4eb2-90f5-413425c3efa2", 00:09:57.436 "is_configured": true, 00:09:57.436 "data_offset": 2048, 00:09:57.436 "data_size": 63488 00:09:57.436 } 00:09:57.436 ] 00:09:57.436 }' 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.436 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.694 
04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.694 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.954 [2024-12-13 04:25:57.716424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.954 "name": "Existed_Raid", 00:09:57.954 "aliases": [ 00:09:57.954 "3a2779c0-b513-4ace-9184-02962ee43327" 00:09:57.954 ], 00:09:57.954 "product_name": "Raid Volume", 00:09:57.954 "block_size": 512, 00:09:57.954 "num_blocks": 253952, 00:09:57.954 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:57.954 "assigned_rate_limits": { 00:09:57.954 "rw_ios_per_sec": 0, 00:09:57.954 "rw_mbytes_per_sec": 0, 00:09:57.954 "r_mbytes_per_sec": 0, 00:09:57.954 "w_mbytes_per_sec": 0 00:09:57.954 }, 00:09:57.954 "claimed": false, 00:09:57.954 "zoned": false, 00:09:57.954 "supported_io_types": { 00:09:57.954 "read": true, 00:09:57.954 "write": true, 00:09:57.954 "unmap": true, 00:09:57.954 "flush": true, 00:09:57.954 "reset": true, 00:09:57.954 "nvme_admin": false, 00:09:57.954 "nvme_io": false, 00:09:57.954 "nvme_io_md": false, 00:09:57.954 "write_zeroes": true, 00:09:57.954 "zcopy": false, 00:09:57.954 "get_zone_info": false, 00:09:57.954 "zone_management": false, 00:09:57.954 "zone_append": false, 00:09:57.954 "compare": false, 00:09:57.954 "compare_and_write": false, 00:09:57.954 "abort": false, 00:09:57.954 "seek_hole": false, 00:09:57.954 "seek_data": false, 00:09:57.954 "copy": false, 00:09:57.954 
"nvme_iov_md": false 00:09:57.954 }, 00:09:57.954 "memory_domains": [ 00:09:57.954 { 00:09:57.954 "dma_device_id": "system", 00:09:57.954 "dma_device_type": 1 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.954 "dma_device_type": 2 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "system", 00:09:57.954 "dma_device_type": 1 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.954 "dma_device_type": 2 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "system", 00:09:57.954 "dma_device_type": 1 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.954 "dma_device_type": 2 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "system", 00:09:57.954 "dma_device_type": 1 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.954 "dma_device_type": 2 00:09:57.954 } 00:09:57.954 ], 00:09:57.954 "driver_specific": { 00:09:57.954 "raid": { 00:09:57.954 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:57.954 "strip_size_kb": 64, 00:09:57.954 "state": "online", 00:09:57.954 "raid_level": "raid0", 00:09:57.954 "superblock": true, 00:09:57.954 "num_base_bdevs": 4, 00:09:57.954 "num_base_bdevs_discovered": 4, 00:09:57.954 "num_base_bdevs_operational": 4, 00:09:57.954 "base_bdevs_list": [ 00:09:57.954 { 00:09:57.954 "name": "BaseBdev1", 00:09:57.954 "uuid": "a94a748a-541c-4652-8397-0ae904dbb536", 00:09:57.954 "is_configured": true, 00:09:57.954 "data_offset": 2048, 00:09:57.954 "data_size": 63488 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "name": "BaseBdev2", 00:09:57.954 "uuid": "78e22b15-07c8-4561-8023-c642fdeb95a1", 00:09:57.954 "is_configured": true, 00:09:57.954 "data_offset": 2048, 00:09:57.954 "data_size": 63488 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "name": "BaseBdev3", 00:09:57.954 "uuid": "517f2368-8fba-445f-8706-983037df8e29", 00:09:57.954 "is_configured": true, 
00:09:57.954 "data_offset": 2048, 00:09:57.954 "data_size": 63488 00:09:57.954 }, 00:09:57.954 { 00:09:57.954 "name": "BaseBdev4", 00:09:57.954 "uuid": "db99eaac-9178-4eb2-90f5-413425c3efa2", 00:09:57.954 "is_configured": true, 00:09:57.954 "data_offset": 2048, 00:09:57.954 "data_size": 63488 00:09:57.954 } 00:09:57.954 ] 00:09:57.954 } 00:09:57.954 } 00:09:57.954 }' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.954 BaseBdev2 00:09:57.954 BaseBdev3 00:09:57.954 BaseBdev4' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.954 04:25:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.954 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.214 04:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.214 [2024-12-13 04:25:58.051591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.214 [2024-12-13 04:25:58.051620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.214 [2024-12-13 04:25:58.051672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:58.214 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.214 "name": "Existed_Raid", 00:09:58.214 "uuid": "3a2779c0-b513-4ace-9184-02962ee43327", 00:09:58.214 "strip_size_kb": 64, 00:09:58.214 "state": "offline", 00:09:58.214 "raid_level": "raid0", 00:09:58.214 "superblock": true, 00:09:58.214 "num_base_bdevs": 4, 00:09:58.214 "num_base_bdevs_discovered": 3, 00:09:58.214 "num_base_bdevs_operational": 3, 00:09:58.214 "base_bdevs_list": [ 00:09:58.214 { 00:09:58.214 "name": null, 00:09:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.214 "is_configured": false, 00:09:58.215 "data_offset": 0, 00:09:58.215 "data_size": 63488 00:09:58.215 }, 00:09:58.215 { 00:09:58.215 "name": "BaseBdev2", 00:09:58.215 "uuid": "78e22b15-07c8-4561-8023-c642fdeb95a1", 00:09:58.215 "is_configured": true, 00:09:58.215 "data_offset": 2048, 00:09:58.215 "data_size": 63488 00:09:58.215 }, 00:09:58.215 { 00:09:58.215 "name": "BaseBdev3", 00:09:58.215 "uuid": "517f2368-8fba-445f-8706-983037df8e29", 00:09:58.215 "is_configured": true, 00:09:58.215 "data_offset": 2048, 00:09:58.215 "data_size": 63488 00:09:58.215 }, 00:09:58.215 { 00:09:58.215 "name": "BaseBdev4", 00:09:58.215 "uuid": "db99eaac-9178-4eb2-90f5-413425c3efa2", 00:09:58.215 "is_configured": true, 00:09:58.215 "data_offset": 2048, 00:09:58.215 "data_size": 63488 00:09:58.215 } 00:09:58.215 ] 00:09:58.215 }' 00:09:58.215 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.215 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.784 
04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 [2024-12-13 04:25:58.547372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 [2024-12-13 04:25:58.611925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:58.784 04:25:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 [2024-12-13 04:25:58.688136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:58.784 [2024-12-13 04:25:58.688235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 BaseBdev2 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.784 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.045 [ 00:09:59.045 { 00:09:59.045 "name": "BaseBdev2", 00:09:59.045 "aliases": [ 00:09:59.045 
"bab8f52b-ab5b-4f43-b30a-a7fb6ce11480" 00:09:59.045 ], 00:09:59.045 "product_name": "Malloc disk", 00:09:59.045 "block_size": 512, 00:09:59.045 "num_blocks": 65536, 00:09:59.045 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:09:59.045 "assigned_rate_limits": { 00:09:59.045 "rw_ios_per_sec": 0, 00:09:59.045 "rw_mbytes_per_sec": 0, 00:09:59.045 "r_mbytes_per_sec": 0, 00:09:59.045 "w_mbytes_per_sec": 0 00:09:59.045 }, 00:09:59.045 "claimed": false, 00:09:59.045 "zoned": false, 00:09:59.045 "supported_io_types": { 00:09:59.045 "read": true, 00:09:59.045 "write": true, 00:09:59.045 "unmap": true, 00:09:59.045 "flush": true, 00:09:59.045 "reset": true, 00:09:59.045 "nvme_admin": false, 00:09:59.045 "nvme_io": false, 00:09:59.045 "nvme_io_md": false, 00:09:59.045 "write_zeroes": true, 00:09:59.045 "zcopy": true, 00:09:59.045 "get_zone_info": false, 00:09:59.045 "zone_management": false, 00:09:59.045 "zone_append": false, 00:09:59.045 "compare": false, 00:09:59.045 "compare_and_write": false, 00:09:59.045 "abort": true, 00:09:59.045 "seek_hole": false, 00:09:59.045 "seek_data": false, 00:09:59.045 "copy": true, 00:09:59.045 "nvme_iov_md": false 00:09:59.045 }, 00:09:59.045 "memory_domains": [ 00:09:59.045 { 00:09:59.045 "dma_device_id": "system", 00:09:59.045 "dma_device_type": 1 00:09:59.045 }, 00:09:59.045 { 00:09:59.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.045 "dma_device_type": 2 00:09:59.045 } 00:09:59.045 ], 00:09:59.045 "driver_specific": {} 00:09:59.045 } 00:09:59.045 ] 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.045 04:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.045 BaseBdev3 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.045 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.046 [ 00:09:59.046 { 
00:09:59.046 "name": "BaseBdev3", 00:09:59.046 "aliases": [ 00:09:59.046 "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7" 00:09:59.046 ], 00:09:59.046 "product_name": "Malloc disk", 00:09:59.046 "block_size": 512, 00:09:59.046 "num_blocks": 65536, 00:09:59.046 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:09:59.046 "assigned_rate_limits": { 00:09:59.046 "rw_ios_per_sec": 0, 00:09:59.046 "rw_mbytes_per_sec": 0, 00:09:59.046 "r_mbytes_per_sec": 0, 00:09:59.046 "w_mbytes_per_sec": 0 00:09:59.046 }, 00:09:59.046 "claimed": false, 00:09:59.046 "zoned": false, 00:09:59.046 "supported_io_types": { 00:09:59.046 "read": true, 00:09:59.046 "write": true, 00:09:59.046 "unmap": true, 00:09:59.046 "flush": true, 00:09:59.046 "reset": true, 00:09:59.046 "nvme_admin": false, 00:09:59.046 "nvme_io": false, 00:09:59.046 "nvme_io_md": false, 00:09:59.046 "write_zeroes": true, 00:09:59.046 "zcopy": true, 00:09:59.046 "get_zone_info": false, 00:09:59.046 "zone_management": false, 00:09:59.046 "zone_append": false, 00:09:59.046 "compare": false, 00:09:59.046 "compare_and_write": false, 00:09:59.046 "abort": true, 00:09:59.046 "seek_hole": false, 00:09:59.046 "seek_data": false, 00:09:59.046 "copy": true, 00:09:59.046 "nvme_iov_md": false 00:09:59.046 }, 00:09:59.046 "memory_domains": [ 00:09:59.046 { 00:09:59.046 "dma_device_id": "system", 00:09:59.046 "dma_device_type": 1 00:09:59.046 }, 00:09:59.046 { 00:09:59.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.046 "dma_device_type": 2 00:09:59.046 } 00:09:59.046 ], 00:09:59.046 "driver_specific": {} 00:09:59.046 } 00:09:59.046 ] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.046 BaseBdev4 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:59.046 [ 00:09:59.046 { 00:09:59.046 "name": "BaseBdev4", 00:09:59.046 "aliases": [ 00:09:59.046 "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db" 00:09:59.046 ], 00:09:59.046 "product_name": "Malloc disk", 00:09:59.046 "block_size": 512, 00:09:59.046 "num_blocks": 65536, 00:09:59.046 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:09:59.046 "assigned_rate_limits": { 00:09:59.046 "rw_ios_per_sec": 0, 00:09:59.046 "rw_mbytes_per_sec": 0, 00:09:59.046 "r_mbytes_per_sec": 0, 00:09:59.046 "w_mbytes_per_sec": 0 00:09:59.046 }, 00:09:59.046 "claimed": false, 00:09:59.046 "zoned": false, 00:09:59.046 "supported_io_types": { 00:09:59.046 "read": true, 00:09:59.046 "write": true, 00:09:59.046 "unmap": true, 00:09:59.046 "flush": true, 00:09:59.046 "reset": true, 00:09:59.046 "nvme_admin": false, 00:09:59.046 "nvme_io": false, 00:09:59.046 "nvme_io_md": false, 00:09:59.046 "write_zeroes": true, 00:09:59.046 "zcopy": true, 00:09:59.046 "get_zone_info": false, 00:09:59.046 "zone_management": false, 00:09:59.046 "zone_append": false, 00:09:59.046 "compare": false, 00:09:59.046 "compare_and_write": false, 00:09:59.046 "abort": true, 00:09:59.046 "seek_hole": false, 00:09:59.046 "seek_data": false, 00:09:59.046 "copy": true, 00:09:59.046 "nvme_iov_md": false 00:09:59.046 }, 00:09:59.046 "memory_domains": [ 00:09:59.046 { 00:09:59.046 "dma_device_id": "system", 00:09:59.046 "dma_device_type": 1 00:09:59.046 }, 00:09:59.046 { 00:09:59.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.046 "dma_device_type": 2 00:09:59.046 } 00:09:59.046 ], 00:09:59.046 "driver_specific": {} 00:09:59.046 } 00:09:59.046 ] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.046 04:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.046 [2024-12-13 04:25:58.948408] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.046 [2024-12-13 04:25:58.948529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.046 [2024-12-13 04:25:58.948594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.046 [2024-12-13 04:25:58.950602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.046 [2024-12-13 04:25:58.950687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.046 04:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.046 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.046 "name": "Existed_Raid", 00:09:59.046 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:09:59.046 "strip_size_kb": 64, 00:09:59.046 "state": "configuring", 00:09:59.046 "raid_level": "raid0", 00:09:59.046 "superblock": true, 00:09:59.046 "num_base_bdevs": 4, 00:09:59.046 "num_base_bdevs_discovered": 3, 00:09:59.046 "num_base_bdevs_operational": 4, 00:09:59.046 "base_bdevs_list": [ 00:09:59.046 { 00:09:59.046 "name": "BaseBdev1", 00:09:59.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.046 "is_configured": false, 00:09:59.046 "data_offset": 0, 00:09:59.046 "data_size": 0 00:09:59.046 }, 00:09:59.046 { 00:09:59.046 "name": "BaseBdev2", 00:09:59.046 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:09:59.046 "is_configured": true, 00:09:59.046 "data_offset": 2048, 00:09:59.046 "data_size": 63488 
00:09:59.046 }, 00:09:59.046 { 00:09:59.046 "name": "BaseBdev3", 00:09:59.046 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:09:59.046 "is_configured": true, 00:09:59.046 "data_offset": 2048, 00:09:59.046 "data_size": 63488 00:09:59.046 }, 00:09:59.046 { 00:09:59.046 "name": "BaseBdev4", 00:09:59.046 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:09:59.046 "is_configured": true, 00:09:59.046 "data_offset": 2048, 00:09:59.047 "data_size": 63488 00:09:59.047 } 00:09:59.047 ] 00:09:59.047 }' 00:09:59.047 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.047 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 [2024-12-13 04:25:59.383641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.615 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.615 "name": "Existed_Raid", 00:09:59.615 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:09:59.615 "strip_size_kb": 64, 00:09:59.615 "state": "configuring", 00:09:59.615 "raid_level": "raid0", 00:09:59.615 "superblock": true, 00:09:59.615 "num_base_bdevs": 4, 00:09:59.615 "num_base_bdevs_discovered": 2, 00:09:59.615 "num_base_bdevs_operational": 4, 00:09:59.615 "base_bdevs_list": [ 00:09:59.615 { 00:09:59.615 "name": "BaseBdev1", 00:09:59.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.616 "is_configured": false, 00:09:59.616 "data_offset": 0, 00:09:59.616 "data_size": 0 00:09:59.616 }, 00:09:59.616 { 00:09:59.616 "name": null, 00:09:59.616 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:09:59.616 "is_configured": false, 00:09:59.616 "data_offset": 0, 00:09:59.616 "data_size": 63488 
00:09:59.616 }, 00:09:59.616 { 00:09:59.616 "name": "BaseBdev3", 00:09:59.616 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:09:59.616 "is_configured": true, 00:09:59.616 "data_offset": 2048, 00:09:59.616 "data_size": 63488 00:09:59.616 }, 00:09:59.616 { 00:09:59.616 "name": "BaseBdev4", 00:09:59.616 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:09:59.616 "is_configured": true, 00:09:59.616 "data_offset": 2048, 00:09:59.616 "data_size": 63488 00:09:59.616 } 00:09:59.616 ] 00:09:59.616 }' 00:09:59.616 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.616 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.876 BaseBdev1 00:09:59.876 [2024-12-13 04:25:59.851459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.876 [ 00:09:59.876 { 00:09:59.876 "name": "BaseBdev1", 00:09:59.876 "aliases": [ 00:09:59.876 "c8a4c43f-250d-49f4-a9aa-79e821ca6a70" 00:09:59.876 ], 00:09:59.876 "product_name": "Malloc disk", 00:09:59.876 "block_size": 512, 00:09:59.876 "num_blocks": 65536, 00:09:59.876 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:09:59.876 "assigned_rate_limits": { 00:09:59.876 "rw_ios_per_sec": 0, 00:09:59.876 "rw_mbytes_per_sec": 0, 
00:09:59.876 "r_mbytes_per_sec": 0, 00:09:59.876 "w_mbytes_per_sec": 0 00:09:59.876 }, 00:09:59.876 "claimed": true, 00:09:59.876 "claim_type": "exclusive_write", 00:09:59.876 "zoned": false, 00:09:59.876 "supported_io_types": { 00:09:59.876 "read": true, 00:09:59.876 "write": true, 00:09:59.876 "unmap": true, 00:09:59.876 "flush": true, 00:09:59.876 "reset": true, 00:09:59.876 "nvme_admin": false, 00:09:59.876 "nvme_io": false, 00:09:59.876 "nvme_io_md": false, 00:09:59.876 "write_zeroes": true, 00:09:59.876 "zcopy": true, 00:09:59.876 "get_zone_info": false, 00:09:59.876 "zone_management": false, 00:09:59.876 "zone_append": false, 00:09:59.876 "compare": false, 00:09:59.876 "compare_and_write": false, 00:09:59.876 "abort": true, 00:09:59.876 "seek_hole": false, 00:09:59.876 "seek_data": false, 00:09:59.876 "copy": true, 00:09:59.876 "nvme_iov_md": false 00:09:59.876 }, 00:09:59.876 "memory_domains": [ 00:09:59.876 { 00:09:59.876 "dma_device_id": "system", 00:09:59.876 "dma_device_type": 1 00:09:59.876 }, 00:09:59.876 { 00:09:59.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.876 "dma_device_type": 2 00:09:59.876 } 00:09:59.876 ], 00:09:59.876 "driver_specific": {} 00:09:59.876 } 00:09:59.876 ] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.876 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.136 04:25:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.136 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.136 "name": "Existed_Raid", 00:10:00.136 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:00.136 "strip_size_kb": 64, 00:10:00.136 "state": "configuring", 00:10:00.136 "raid_level": "raid0", 00:10:00.136 "superblock": true, 00:10:00.136 "num_base_bdevs": 4, 00:10:00.136 "num_base_bdevs_discovered": 3, 00:10:00.136 "num_base_bdevs_operational": 4, 00:10:00.136 "base_bdevs_list": [ 00:10:00.136 { 00:10:00.136 "name": "BaseBdev1", 00:10:00.136 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:00.137 "is_configured": true, 00:10:00.137 "data_offset": 2048, 00:10:00.137 "data_size": 63488 00:10:00.137 }, 00:10:00.137 { 
00:10:00.137 "name": null, 00:10:00.137 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:00.137 "is_configured": false, 00:10:00.137 "data_offset": 0, 00:10:00.137 "data_size": 63488 00:10:00.137 }, 00:10:00.137 { 00:10:00.137 "name": "BaseBdev3", 00:10:00.137 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:00.137 "is_configured": true, 00:10:00.137 "data_offset": 2048, 00:10:00.137 "data_size": 63488 00:10:00.137 }, 00:10:00.137 { 00:10:00.137 "name": "BaseBdev4", 00:10:00.137 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:00.137 "is_configured": true, 00:10:00.137 "data_offset": 2048, 00:10:00.137 "data_size": 63488 00:10:00.137 } 00:10:00.137 ] 00:10:00.137 }' 00:10:00.137 04:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.137 04:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.397 [2024-12-13 04:26:00.386561] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.397 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.656 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.656 04:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.656 "name": "Existed_Raid", 00:10:00.656 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:00.656 "strip_size_kb": 64, 00:10:00.656 "state": "configuring", 00:10:00.656 "raid_level": "raid0", 00:10:00.656 "superblock": true, 00:10:00.656 "num_base_bdevs": 4, 00:10:00.656 "num_base_bdevs_discovered": 2, 00:10:00.656 "num_base_bdevs_operational": 4, 00:10:00.656 "base_bdevs_list": [ 00:10:00.656 { 00:10:00.656 "name": "BaseBdev1", 00:10:00.656 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:00.656 "is_configured": true, 00:10:00.656 "data_offset": 2048, 00:10:00.656 "data_size": 63488 00:10:00.656 }, 00:10:00.656 { 00:10:00.656 "name": null, 00:10:00.656 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:00.656 "is_configured": false, 00:10:00.656 "data_offset": 0, 00:10:00.656 "data_size": 63488 00:10:00.656 }, 00:10:00.656 { 00:10:00.656 "name": null, 00:10:00.656 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:00.656 "is_configured": false, 00:10:00.656 "data_offset": 0, 00:10:00.656 "data_size": 63488 00:10:00.656 }, 00:10:00.656 { 00:10:00.656 "name": "BaseBdev4", 00:10:00.656 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:00.656 "is_configured": true, 00:10:00.656 "data_offset": 2048, 00:10:00.656 "data_size": 63488 00:10:00.656 } 00:10:00.656 ] 00:10:00.656 }' 00:10:00.656 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.656 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.917 
04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.917 [2024-12-13 04:26:00.857778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.917 "name": "Existed_Raid", 00:10:00.917 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:00.917 "strip_size_kb": 64, 00:10:00.917 "state": "configuring", 00:10:00.917 "raid_level": "raid0", 00:10:00.917 "superblock": true, 00:10:00.917 "num_base_bdevs": 4, 00:10:00.917 "num_base_bdevs_discovered": 3, 00:10:00.917 "num_base_bdevs_operational": 4, 00:10:00.917 "base_bdevs_list": [ 00:10:00.917 { 00:10:00.917 "name": "BaseBdev1", 00:10:00.917 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:00.917 "is_configured": true, 00:10:00.917 "data_offset": 2048, 00:10:00.917 "data_size": 63488 00:10:00.917 }, 00:10:00.917 { 00:10:00.917 "name": null, 00:10:00.917 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:00.917 "is_configured": false, 00:10:00.917 "data_offset": 0, 00:10:00.917 "data_size": 63488 00:10:00.917 }, 00:10:00.917 { 00:10:00.917 "name": "BaseBdev3", 00:10:00.917 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:00.917 "is_configured": true, 00:10:00.917 "data_offset": 2048, 00:10:00.917 "data_size": 63488 00:10:00.917 }, 00:10:00.917 { 00:10:00.917 "name": "BaseBdev4", 00:10:00.917 "uuid": 
"5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:00.917 "is_configured": true, 00:10:00.917 "data_offset": 2048, 00:10:00.917 "data_size": 63488 00:10:00.917 } 00:10:00.917 ] 00:10:00.917 }' 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.917 04:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.486 [2024-12-13 04:26:01.329021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.486 "name": "Existed_Raid", 00:10:01.486 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:01.486 "strip_size_kb": 64, 00:10:01.486 "state": "configuring", 00:10:01.486 "raid_level": "raid0", 00:10:01.486 "superblock": true, 00:10:01.486 "num_base_bdevs": 4, 00:10:01.486 "num_base_bdevs_discovered": 2, 00:10:01.486 "num_base_bdevs_operational": 4, 00:10:01.486 "base_bdevs_list": [ 00:10:01.486 { 00:10:01.486 "name": null, 00:10:01.486 
"uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:01.486 "is_configured": false, 00:10:01.486 "data_offset": 0, 00:10:01.486 "data_size": 63488 00:10:01.486 }, 00:10:01.486 { 00:10:01.486 "name": null, 00:10:01.486 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:01.486 "is_configured": false, 00:10:01.486 "data_offset": 0, 00:10:01.486 "data_size": 63488 00:10:01.486 }, 00:10:01.486 { 00:10:01.486 "name": "BaseBdev3", 00:10:01.486 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:01.486 "is_configured": true, 00:10:01.486 "data_offset": 2048, 00:10:01.486 "data_size": 63488 00:10:01.486 }, 00:10:01.486 { 00:10:01.486 "name": "BaseBdev4", 00:10:01.486 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:01.486 "is_configured": true, 00:10:01.486 "data_offset": 2048, 00:10:01.486 "data_size": 63488 00:10:01.486 } 00:10:01.486 ] 00:10:01.486 }' 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.486 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.056 [2024-12-13 04:26:01.804343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.056 04:26:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.056 "name": "Existed_Raid", 00:10:02.056 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:02.056 "strip_size_kb": 64, 00:10:02.056 "state": "configuring", 00:10:02.056 "raid_level": "raid0", 00:10:02.056 "superblock": true, 00:10:02.056 "num_base_bdevs": 4, 00:10:02.056 "num_base_bdevs_discovered": 3, 00:10:02.056 "num_base_bdevs_operational": 4, 00:10:02.056 "base_bdevs_list": [ 00:10:02.056 { 00:10:02.056 "name": null, 00:10:02.056 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:02.056 "is_configured": false, 00:10:02.056 "data_offset": 0, 00:10:02.056 "data_size": 63488 00:10:02.056 }, 00:10:02.056 { 00:10:02.056 "name": "BaseBdev2", 00:10:02.056 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:02.056 "is_configured": true, 00:10:02.056 "data_offset": 2048, 00:10:02.056 "data_size": 63488 00:10:02.056 }, 00:10:02.056 { 00:10:02.056 "name": "BaseBdev3", 00:10:02.056 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:02.056 "is_configured": true, 00:10:02.056 "data_offset": 2048, 00:10:02.056 "data_size": 63488 00:10:02.056 }, 00:10:02.056 { 00:10:02.056 "name": "BaseBdev4", 00:10:02.056 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:02.056 "is_configured": true, 00:10:02.056 "data_offset": 2048, 00:10:02.056 "data_size": 63488 00:10:02.056 } 00:10:02.056 ] 00:10:02.056 }' 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.056 04:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.316 04:26:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c8a4c43f-250d-49f4-a9aa-79e821ca6a70 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.316 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.576 [2024-12-13 04:26:02.340338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:02.576 [2024-12-13 04:26:02.340579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:02.576 [2024-12-13 04:26:02.340602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:02.576 [2024-12-13 04:26:02.340890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:02.576 [2024-12-13 04:26:02.341023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:02.576 [2024-12-13 04:26:02.341040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:02.576 [2024-12-13 04:26:02.341145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.576 NewBaseBdev 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:02.576 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.576 04:26:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.576 [ 00:10:02.576 { 00:10:02.576 "name": "NewBaseBdev", 00:10:02.576 "aliases": [ 00:10:02.576 "c8a4c43f-250d-49f4-a9aa-79e821ca6a70" 00:10:02.576 ], 00:10:02.576 "product_name": "Malloc disk", 00:10:02.576 "block_size": 512, 00:10:02.576 "num_blocks": 65536, 00:10:02.576 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:02.576 "assigned_rate_limits": { 00:10:02.576 "rw_ios_per_sec": 0, 00:10:02.576 "rw_mbytes_per_sec": 0, 00:10:02.576 "r_mbytes_per_sec": 0, 00:10:02.576 "w_mbytes_per_sec": 0 00:10:02.576 }, 00:10:02.576 "claimed": true, 00:10:02.576 "claim_type": "exclusive_write", 00:10:02.576 "zoned": false, 00:10:02.576 "supported_io_types": { 00:10:02.576 "read": true, 00:10:02.576 "write": true, 00:10:02.576 "unmap": true, 00:10:02.576 "flush": true, 00:10:02.576 "reset": true, 00:10:02.577 "nvme_admin": false, 00:10:02.577 "nvme_io": false, 00:10:02.577 "nvme_io_md": false, 00:10:02.577 "write_zeroes": true, 00:10:02.577 "zcopy": true, 00:10:02.577 "get_zone_info": false, 00:10:02.577 "zone_management": false, 00:10:02.577 "zone_append": false, 00:10:02.577 "compare": false, 00:10:02.577 "compare_and_write": false, 00:10:02.577 "abort": true, 00:10:02.577 "seek_hole": false, 00:10:02.577 "seek_data": false, 00:10:02.577 "copy": true, 00:10:02.577 "nvme_iov_md": false 00:10:02.577 }, 00:10:02.577 "memory_domains": [ 00:10:02.577 { 00:10:02.577 "dma_device_id": "system", 00:10:02.577 "dma_device_type": 1 00:10:02.577 }, 00:10:02.577 { 00:10:02.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.577 "dma_device_type": 2 00:10:02.577 } 00:10:02.577 ], 00:10:02.577 "driver_specific": {} 00:10:02.577 } 00:10:02.577 ] 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.577 04:26:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.577 "name": "Existed_Raid", 00:10:02.577 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:02.577 "strip_size_kb": 64, 00:10:02.577 
"state": "online", 00:10:02.577 "raid_level": "raid0", 00:10:02.577 "superblock": true, 00:10:02.577 "num_base_bdevs": 4, 00:10:02.577 "num_base_bdevs_discovered": 4, 00:10:02.577 "num_base_bdevs_operational": 4, 00:10:02.577 "base_bdevs_list": [ 00:10:02.577 { 00:10:02.577 "name": "NewBaseBdev", 00:10:02.577 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:02.577 "is_configured": true, 00:10:02.577 "data_offset": 2048, 00:10:02.577 "data_size": 63488 00:10:02.577 }, 00:10:02.577 { 00:10:02.577 "name": "BaseBdev2", 00:10:02.577 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:02.577 "is_configured": true, 00:10:02.577 "data_offset": 2048, 00:10:02.577 "data_size": 63488 00:10:02.577 }, 00:10:02.577 { 00:10:02.577 "name": "BaseBdev3", 00:10:02.577 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:02.577 "is_configured": true, 00:10:02.577 "data_offset": 2048, 00:10:02.577 "data_size": 63488 00:10:02.577 }, 00:10:02.577 { 00:10:02.577 "name": "BaseBdev4", 00:10:02.577 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:02.577 "is_configured": true, 00:10:02.577 "data_offset": 2048, 00:10:02.577 "data_size": 63488 00:10:02.577 } 00:10:02.577 ] 00:10:02.577 }' 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.577 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.837 
04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.837 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.837 [2024-12-13 04:26:02.851887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.097 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.097 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.097 "name": "Existed_Raid", 00:10:03.097 "aliases": [ 00:10:03.097 "3b7f6400-37c2-4cd4-852c-f4579ee7701a" 00:10:03.097 ], 00:10:03.097 "product_name": "Raid Volume", 00:10:03.097 "block_size": 512, 00:10:03.097 "num_blocks": 253952, 00:10:03.097 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:03.097 "assigned_rate_limits": { 00:10:03.097 "rw_ios_per_sec": 0, 00:10:03.097 "rw_mbytes_per_sec": 0, 00:10:03.097 "r_mbytes_per_sec": 0, 00:10:03.097 "w_mbytes_per_sec": 0 00:10:03.097 }, 00:10:03.097 "claimed": false, 00:10:03.097 "zoned": false, 00:10:03.097 "supported_io_types": { 00:10:03.097 "read": true, 00:10:03.097 "write": true, 00:10:03.097 "unmap": true, 00:10:03.097 "flush": true, 00:10:03.097 "reset": true, 00:10:03.097 "nvme_admin": false, 00:10:03.097 "nvme_io": false, 00:10:03.097 "nvme_io_md": false, 00:10:03.097 "write_zeroes": true, 00:10:03.097 "zcopy": false, 00:10:03.097 "get_zone_info": false, 00:10:03.097 "zone_management": false, 00:10:03.097 "zone_append": false, 00:10:03.097 "compare": false, 00:10:03.097 "compare_and_write": false, 00:10:03.097 "abort": 
false, 00:10:03.097 "seek_hole": false, 00:10:03.097 "seek_data": false, 00:10:03.097 "copy": false, 00:10:03.097 "nvme_iov_md": false 00:10:03.097 }, 00:10:03.097 "memory_domains": [ 00:10:03.097 { 00:10:03.097 "dma_device_id": "system", 00:10:03.097 "dma_device_type": 1 00:10:03.097 }, 00:10:03.097 { 00:10:03.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.097 "dma_device_type": 2 00:10:03.097 }, 00:10:03.097 { 00:10:03.097 "dma_device_id": "system", 00:10:03.097 "dma_device_type": 1 00:10:03.097 }, 00:10:03.097 { 00:10:03.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.097 "dma_device_type": 2 00:10:03.097 }, 00:10:03.097 { 00:10:03.097 "dma_device_id": "system", 00:10:03.097 "dma_device_type": 1 00:10:03.097 }, 00:10:03.097 { 00:10:03.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.097 "dma_device_type": 2 00:10:03.097 }, 00:10:03.097 { 00:10:03.097 "dma_device_id": "system", 00:10:03.097 "dma_device_type": 1 00:10:03.097 }, 00:10:03.097 { 00:10:03.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.098 "dma_device_type": 2 00:10:03.098 } 00:10:03.098 ], 00:10:03.098 "driver_specific": { 00:10:03.098 "raid": { 00:10:03.098 "uuid": "3b7f6400-37c2-4cd4-852c-f4579ee7701a", 00:10:03.098 "strip_size_kb": 64, 00:10:03.098 "state": "online", 00:10:03.098 "raid_level": "raid0", 00:10:03.098 "superblock": true, 00:10:03.098 "num_base_bdevs": 4, 00:10:03.098 "num_base_bdevs_discovered": 4, 00:10:03.098 "num_base_bdevs_operational": 4, 00:10:03.098 "base_bdevs_list": [ 00:10:03.098 { 00:10:03.098 "name": "NewBaseBdev", 00:10:03.098 "uuid": "c8a4c43f-250d-49f4-a9aa-79e821ca6a70", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "name": "BaseBdev2", 00:10:03.098 "uuid": "bab8f52b-ab5b-4f43-b30a-a7fb6ce11480", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 
"name": "BaseBdev3", 00:10:03.098 "uuid": "f6b3dab9-30ce-4b56-ae9c-5e252b849ca7", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "name": "BaseBdev4", 00:10:03.098 "uuid": "5a9f68a3-b7a9-4b1c-921a-3d729e61d8db", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 } 00:10:03.098 ] 00:10:03.098 } 00:10:03.098 } 00:10:03.098 }' 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:03.098 BaseBdev2 00:10:03.098 BaseBdev3 00:10:03.098 BaseBdev4' 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 04:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.098 04:26:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.098 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.374 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.374 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.375 [2024-12-13 04:26:03.127029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.375 [2024-12-13 04:26:03.127068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.375 [2024-12-13 04:26:03.127153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.375 [2024-12-13 04:26:03.127233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.375 [2024-12-13 04:26:03.127256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82687 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82687 ']' 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82687 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82687 00:10:03.375 killing process with pid 82687 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82687' 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82687 00:10:03.375 [2024-12-13 04:26:03.163120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.375 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82687 00:10:03.375 [2024-12-13 04:26:03.240647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.652 04:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.652 00:10:03.652 real 0m9.634s 00:10:03.652 user 0m16.087s 00:10:03.652 sys 0m2.146s 00:10:03.652 04:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.652 04:26:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.652 ************************************ 00:10:03.652 END TEST raid_state_function_test_sb 00:10:03.652 ************************************ 00:10:03.652 04:26:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:03.652 04:26:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:03.652 04:26:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.653 04:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.653 ************************************ 00:10:03.653 START TEST raid_superblock_test 00:10:03.653 ************************************ 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83335 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83335 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83335 ']' 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.653 04:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.913 [2024-12-13 04:26:03.725437] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:03.913 [2024-12-13 04:26:03.725586] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83335 ] 00:10:03.913 [2024-12-13 04:26:03.879504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.913 [2024-12-13 04:26:03.918594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.172 [2024-12-13 04:26:03.994608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.172 [2024-12-13 04:26:03.994649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:04.742 
04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.742 malloc1 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.742 [2024-12-13 04:26:04.583623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.742 [2024-12-13 04:26:04.583685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.742 [2024-12-13 04:26:04.583705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:04.742 [2024-12-13 04:26:04.583727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.742 [2024-12-13 04:26:04.586084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.742 [2024-12-13 04:26:04.586122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.742 pt1 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.742 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 malloc2 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 [2024-12-13 04:26:04.618028] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.743 [2024-12-13 04:26:04.618085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.743 [2024-12-13 04:26:04.618103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:04.743 [2024-12-13 04:26:04.618113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.743 [2024-12-13 04:26:04.620385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.743 [2024-12-13 04:26:04.620421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.743 
pt2 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 malloc3 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 [2024-12-13 04:26:04.652428] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.743 [2024-12-13 04:26:04.652515] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.743 [2024-12-13 04:26:04.652536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:04.743 [2024-12-13 04:26:04.652548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.743 [2024-12-13 04:26:04.654956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.743 [2024-12-13 04:26:04.654990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:04.743 pt3 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 malloc4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 [2024-12-13 04:26:04.695382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:04.743 [2024-12-13 04:26:04.695435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.743 [2024-12-13 04:26:04.695476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:04.743 [2024-12-13 04:26:04.695490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.743 [2024-12-13 04:26:04.697816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.743 [2024-12-13 04:26:04.697859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:04.743 pt4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 [2024-12-13 04:26:04.707392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.743 [2024-12-13 
04:26:04.709530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.743 [2024-12-13 04:26:04.709597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.743 [2024-12-13 04:26:04.709670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:04.743 [2024-12-13 04:26:04.709815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:04.743 [2024-12-13 04:26:04.709830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.743 [2024-12-13 04:26:04.710085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:04.743 [2024-12-13 04:26:04.710238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:04.743 [2024-12-13 04:26:04.710254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:04.743 [2024-12-13 04:26:04.710364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.743 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.003 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.003 "name": "raid_bdev1", 00:10:05.003 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:05.003 "strip_size_kb": 64, 00:10:05.003 "state": "online", 00:10:05.003 "raid_level": "raid0", 00:10:05.003 "superblock": true, 00:10:05.003 "num_base_bdevs": 4, 00:10:05.003 "num_base_bdevs_discovered": 4, 00:10:05.003 "num_base_bdevs_operational": 4, 00:10:05.003 "base_bdevs_list": [ 00:10:05.003 { 00:10:05.003 "name": "pt1", 00:10:05.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.003 "is_configured": true, 00:10:05.003 "data_offset": 2048, 00:10:05.003 "data_size": 63488 00:10:05.003 }, 00:10:05.003 { 00:10:05.003 "name": "pt2", 00:10:05.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.003 "is_configured": true, 00:10:05.003 "data_offset": 2048, 00:10:05.003 "data_size": 63488 00:10:05.003 }, 00:10:05.003 { 00:10:05.003 "name": "pt3", 00:10:05.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.003 "is_configured": true, 00:10:05.003 "data_offset": 2048, 00:10:05.003 
"data_size": 63488 00:10:05.003 }, 00:10:05.003 { 00:10:05.003 "name": "pt4", 00:10:05.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:05.003 "is_configured": true, 00:10:05.003 "data_offset": 2048, 00:10:05.003 "data_size": 63488 00:10:05.003 } 00:10:05.003 ] 00:10:05.003 }' 00:10:05.003 04:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.003 04:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.261 [2024-12-13 04:26:05.198876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.261 "name": "raid_bdev1", 00:10:05.261 "aliases": [ 00:10:05.261 "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649" 
00:10:05.261 ], 00:10:05.261 "product_name": "Raid Volume", 00:10:05.261 "block_size": 512, 00:10:05.261 "num_blocks": 253952, 00:10:05.261 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:05.261 "assigned_rate_limits": { 00:10:05.261 "rw_ios_per_sec": 0, 00:10:05.261 "rw_mbytes_per_sec": 0, 00:10:05.261 "r_mbytes_per_sec": 0, 00:10:05.261 "w_mbytes_per_sec": 0 00:10:05.261 }, 00:10:05.261 "claimed": false, 00:10:05.261 "zoned": false, 00:10:05.261 "supported_io_types": { 00:10:05.261 "read": true, 00:10:05.261 "write": true, 00:10:05.261 "unmap": true, 00:10:05.261 "flush": true, 00:10:05.261 "reset": true, 00:10:05.261 "nvme_admin": false, 00:10:05.261 "nvme_io": false, 00:10:05.261 "nvme_io_md": false, 00:10:05.261 "write_zeroes": true, 00:10:05.261 "zcopy": false, 00:10:05.261 "get_zone_info": false, 00:10:05.261 "zone_management": false, 00:10:05.261 "zone_append": false, 00:10:05.261 "compare": false, 00:10:05.261 "compare_and_write": false, 00:10:05.261 "abort": false, 00:10:05.261 "seek_hole": false, 00:10:05.261 "seek_data": false, 00:10:05.261 "copy": false, 00:10:05.261 "nvme_iov_md": false 00:10:05.261 }, 00:10:05.261 "memory_domains": [ 00:10:05.261 { 00:10:05.261 "dma_device_id": "system", 00:10:05.261 "dma_device_type": 1 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.261 "dma_device_type": 2 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": "system", 00:10:05.261 "dma_device_type": 1 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.261 "dma_device_type": 2 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": "system", 00:10:05.261 "dma_device_type": 1 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.261 "dma_device_type": 2 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": "system", 00:10:05.261 "dma_device_type": 1 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:05.261 "dma_device_type": 2 00:10:05.261 } 00:10:05.261 ], 00:10:05.261 "driver_specific": { 00:10:05.261 "raid": { 00:10:05.261 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:05.261 "strip_size_kb": 64, 00:10:05.261 "state": "online", 00:10:05.261 "raid_level": "raid0", 00:10:05.261 "superblock": true, 00:10:05.261 "num_base_bdevs": 4, 00:10:05.261 "num_base_bdevs_discovered": 4, 00:10:05.261 "num_base_bdevs_operational": 4, 00:10:05.261 "base_bdevs_list": [ 00:10:05.261 { 00:10:05.261 "name": "pt1", 00:10:05.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.261 "is_configured": true, 00:10:05.261 "data_offset": 2048, 00:10:05.261 "data_size": 63488 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "name": "pt2", 00:10:05.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.261 "is_configured": true, 00:10:05.261 "data_offset": 2048, 00:10:05.261 "data_size": 63488 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "name": "pt3", 00:10:05.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.261 "is_configured": true, 00:10:05.261 "data_offset": 2048, 00:10:05.261 "data_size": 63488 00:10:05.261 }, 00:10:05.261 { 00:10:05.261 "name": "pt4", 00:10:05.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:05.261 "is_configured": true, 00:10:05.261 "data_offset": 2048, 00:10:05.261 "data_size": 63488 00:10:05.261 } 00:10:05.261 ] 00:10:05.261 } 00:10:05.261 } 00:10:05.261 }' 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:05.261 pt2 00:10:05.261 pt3 00:10:05.261 pt4' 00:10:05.261 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.520 04:26:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.520 [2024-12-13 04:26:05.490286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6843f1ff-518a-4e1b-acd2-5c9f1ca7a649 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6843f1ff-518a-4e1b-acd2-5c9f1ca7a649 ']' 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.520 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 [2024-12-13 04:26:05.537958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.780 [2024-12-13 04:26:05.537990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.780 [2024-12-13 04:26:05.538066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.780 [2024-12-13 04:26:05.538158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.780 [2024-12-13 04:26:05.538168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.780 04:26:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.780 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.781 [2024-12-13 04:26:05.689716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:05.781 [2024-12-13 04:26:05.691818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:05.781 [2024-12-13 04:26:05.691864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:05.781 [2024-12-13 04:26:05.691892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:05.781 [2024-12-13 04:26:05.691956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:05.781 [2024-12-13 04:26:05.692006] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:05.781 [2024-12-13 04:26:05.692026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:05.781 [2024-12-13 04:26:05.692041] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:05.781 [2024-12-13 04:26:05.692054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.781 [2024-12-13 04:26:05.692063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:10:05.781 request: 00:10:05.781 { 00:10:05.781 "name": "raid_bdev1", 00:10:05.781 "raid_level": "raid0", 00:10:05.781 "base_bdevs": [ 00:10:05.781 "malloc1", 00:10:05.781 "malloc2", 00:10:05.781 "malloc3", 00:10:05.781 "malloc4" 00:10:05.781 ], 00:10:05.781 "strip_size_kb": 64, 00:10:05.781 "superblock": false, 00:10:05.781 "method": "bdev_raid_create", 00:10:05.781 "req_id": 1 00:10:05.781 } 00:10:05.781 Got JSON-RPC error response 00:10:05.781 response: 00:10:05.781 { 00:10:05.781 "code": -17, 00:10:05.781 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:05.781 } 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.781 [2024-12-13 04:26:05.741615] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:05.781 [2024-12-13 04:26:05.741663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.781 [2024-12-13 04:26:05.741685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:05.781 [2024-12-13 04:26:05.741694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.781 [2024-12-13 04:26:05.744077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.781 [2024-12-13 04:26:05.744111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:05.781 [2024-12-13 04:26:05.744174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:05.781 [2024-12-13 04:26:05.744214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:05.781 pt1 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.781 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.040 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.040 "name": "raid_bdev1", 00:10:06.040 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:06.040 "strip_size_kb": 64, 00:10:06.040 "state": "configuring", 00:10:06.040 "raid_level": "raid0", 00:10:06.040 "superblock": true, 00:10:06.040 "num_base_bdevs": 4, 00:10:06.040 "num_base_bdevs_discovered": 1, 00:10:06.040 "num_base_bdevs_operational": 4, 00:10:06.040 "base_bdevs_list": [ 00:10:06.040 { 00:10:06.040 "name": "pt1", 00:10:06.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.040 "is_configured": true, 00:10:06.040 "data_offset": 2048, 00:10:06.040 "data_size": 63488 00:10:06.040 }, 00:10:06.040 { 00:10:06.040 "name": null, 00:10:06.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.040 "is_configured": false, 00:10:06.040 "data_offset": 2048, 00:10:06.040 "data_size": 63488 00:10:06.040 }, 00:10:06.040 { 00:10:06.040 "name": null, 00:10:06.040 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.040 "is_configured": false, 00:10:06.040 "data_offset": 2048, 00:10:06.040 "data_size": 63488 00:10:06.040 }, 00:10:06.040 { 00:10:06.040 "name": null, 00:10:06.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.040 "is_configured": false, 00:10:06.040 "data_offset": 2048, 00:10:06.040 "data_size": 63488 00:10:06.040 } 00:10:06.040 ] 00:10:06.040 }' 00:10:06.040 04:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.040 04:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.298 [2024-12-13 04:26:06.172874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:06.298 [2024-12-13 04:26:06.172929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.298 [2024-12-13 04:26:06.172949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:06.298 [2024-12-13 04:26:06.172958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.298 [2024-12-13 04:26:06.173362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.298 [2024-12-13 04:26:06.173386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:06.298 [2024-12-13 04:26:06.173472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:06.298 [2024-12-13 04:26:06.173493] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:06.298 pt2 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.298 [2024-12-13 04:26:06.184884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.298 04:26:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.298 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.298 "name": "raid_bdev1", 00:10:06.298 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:06.298 "strip_size_kb": 64, 00:10:06.298 "state": "configuring", 00:10:06.298 "raid_level": "raid0", 00:10:06.298 "superblock": true, 00:10:06.298 "num_base_bdevs": 4, 00:10:06.298 "num_base_bdevs_discovered": 1, 00:10:06.298 "num_base_bdevs_operational": 4, 00:10:06.298 "base_bdevs_list": [ 00:10:06.298 { 00:10:06.298 "name": "pt1", 00:10:06.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.298 "is_configured": true, 00:10:06.298 "data_offset": 2048, 00:10:06.298 "data_size": 63488 00:10:06.298 }, 00:10:06.298 { 00:10:06.298 "name": null, 00:10:06.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.298 "is_configured": false, 00:10:06.298 "data_offset": 0, 00:10:06.298 "data_size": 63488 00:10:06.298 }, 00:10:06.298 { 00:10:06.298 "name": null, 00:10:06.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.298 "is_configured": false, 00:10:06.298 "data_offset": 2048, 00:10:06.298 "data_size": 63488 00:10:06.298 }, 00:10:06.298 { 00:10:06.298 "name": null, 00:10:06.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.298 "is_configured": false, 00:10:06.298 "data_offset": 2048, 00:10:06.298 "data_size": 63488 00:10:06.298 } 00:10:06.298 ] 00:10:06.299 }' 00:10:06.299 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.299 04:26:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.867 [2024-12-13 04:26:06.648572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:06.867 [2024-12-13 04:26:06.648673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.867 [2024-12-13 04:26:06.648710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:06.867 [2024-12-13 04:26:06.648741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.867 [2024-12-13 04:26:06.649138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.867 [2024-12-13 04:26:06.649194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:06.867 [2024-12-13 04:26:06.649280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:06.867 [2024-12-13 04:26:06.649337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:06.867 pt2 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.867 [2024-12-13 04:26:06.660548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:06.867 [2024-12-13 04:26:06.660634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.867 [2024-12-13 04:26:06.660664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:06.867 [2024-12-13 04:26:06.660692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.867 [2024-12-13 04:26:06.661042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.867 [2024-12-13 04:26:06.661098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:06.867 [2024-12-13 04:26:06.661171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:06.867 [2024-12-13 04:26:06.661217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:06.867 pt3 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.867 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.867 [2024-12-13 04:26:06.672550] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:06.867 [2024-12-13 04:26:06.672618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.867 [2024-12-13 04:26:06.672630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:06.867 [2024-12-13 04:26:06.672641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.867 [2024-12-13 04:26:06.672955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.868 [2024-12-13 04:26:06.672973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:06.868 [2024-12-13 04:26:06.673022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:06.868 [2024-12-13 04:26:06.673041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:06.868 [2024-12-13 04:26:06.673138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:06.868 [2024-12-13 04:26:06.673149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:06.868 [2024-12-13 04:26:06.673397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:06.868 [2024-12-13 04:26:06.673535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:06.868 [2024-12-13 04:26:06.673545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:06.868 [2024-12-13 04:26:06.673643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.868 pt4 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.868 "name": "raid_bdev1", 00:10:06.868 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:06.868 "strip_size_kb": 64, 00:10:06.868 "state": "online", 00:10:06.868 "raid_level": "raid0", 00:10:06.868 
"superblock": true, 00:10:06.868 "num_base_bdevs": 4, 00:10:06.868 "num_base_bdevs_discovered": 4, 00:10:06.868 "num_base_bdevs_operational": 4, 00:10:06.868 "base_bdevs_list": [ 00:10:06.868 { 00:10:06.868 "name": "pt1", 00:10:06.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.868 "is_configured": true, 00:10:06.868 "data_offset": 2048, 00:10:06.868 "data_size": 63488 00:10:06.868 }, 00:10:06.868 { 00:10:06.868 "name": "pt2", 00:10:06.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.868 "is_configured": true, 00:10:06.868 "data_offset": 2048, 00:10:06.868 "data_size": 63488 00:10:06.868 }, 00:10:06.868 { 00:10:06.868 "name": "pt3", 00:10:06.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.868 "is_configured": true, 00:10:06.868 "data_offset": 2048, 00:10:06.868 "data_size": 63488 00:10:06.868 }, 00:10:06.868 { 00:10:06.868 "name": "pt4", 00:10:06.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.868 "is_configured": true, 00:10:06.868 "data_offset": 2048, 00:10:06.868 "data_size": 63488 00:10:06.868 } 00:10:06.868 ] 00:10:06.868 }' 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.868 04:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.127 04:26:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:07.127 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.128 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.128 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.128 [2024-12-13 04:26:07.080873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.128 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.128 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.128 "name": "raid_bdev1", 00:10:07.128 "aliases": [ 00:10:07.128 "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649" 00:10:07.128 ], 00:10:07.128 "product_name": "Raid Volume", 00:10:07.128 "block_size": 512, 00:10:07.128 "num_blocks": 253952, 00:10:07.128 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:07.128 "assigned_rate_limits": { 00:10:07.128 "rw_ios_per_sec": 0, 00:10:07.128 "rw_mbytes_per_sec": 0, 00:10:07.128 "r_mbytes_per_sec": 0, 00:10:07.128 "w_mbytes_per_sec": 0 00:10:07.128 }, 00:10:07.128 "claimed": false, 00:10:07.128 "zoned": false, 00:10:07.128 "supported_io_types": { 00:10:07.128 "read": true, 00:10:07.128 "write": true, 00:10:07.128 "unmap": true, 00:10:07.128 "flush": true, 00:10:07.128 "reset": true, 00:10:07.128 "nvme_admin": false, 00:10:07.128 "nvme_io": false, 00:10:07.128 "nvme_io_md": false, 00:10:07.128 "write_zeroes": true, 00:10:07.128 "zcopy": false, 00:10:07.128 "get_zone_info": false, 00:10:07.128 "zone_management": false, 00:10:07.128 "zone_append": false, 00:10:07.128 "compare": false, 00:10:07.128 "compare_and_write": false, 00:10:07.128 "abort": false, 00:10:07.128 "seek_hole": false, 00:10:07.128 "seek_data": false, 00:10:07.128 "copy": false, 00:10:07.128 "nvme_iov_md": false 00:10:07.128 }, 00:10:07.128 
"memory_domains": [ 00:10:07.128 { 00:10:07.128 "dma_device_id": "system", 00:10:07.128 "dma_device_type": 1 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.128 "dma_device_type": 2 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "system", 00:10:07.128 "dma_device_type": 1 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.128 "dma_device_type": 2 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "system", 00:10:07.128 "dma_device_type": 1 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.128 "dma_device_type": 2 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "system", 00:10:07.128 "dma_device_type": 1 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.128 "dma_device_type": 2 00:10:07.128 } 00:10:07.128 ], 00:10:07.128 "driver_specific": { 00:10:07.128 "raid": { 00:10:07.128 "uuid": "6843f1ff-518a-4e1b-acd2-5c9f1ca7a649", 00:10:07.128 "strip_size_kb": 64, 00:10:07.128 "state": "online", 00:10:07.128 "raid_level": "raid0", 00:10:07.128 "superblock": true, 00:10:07.128 "num_base_bdevs": 4, 00:10:07.128 "num_base_bdevs_discovered": 4, 00:10:07.128 "num_base_bdevs_operational": 4, 00:10:07.128 "base_bdevs_list": [ 00:10:07.128 { 00:10:07.128 "name": "pt1", 00:10:07.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.128 "is_configured": true, 00:10:07.128 "data_offset": 2048, 00:10:07.128 "data_size": 63488 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "name": "pt2", 00:10:07.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.128 "is_configured": true, 00:10:07.128 "data_offset": 2048, 00:10:07.128 "data_size": 63488 00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "name": "pt3", 00:10:07.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.128 "is_configured": true, 00:10:07.128 "data_offset": 2048, 00:10:07.128 "data_size": 63488 
00:10:07.128 }, 00:10:07.128 { 00:10:07.128 "name": "pt4", 00:10:07.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:07.128 "is_configured": true, 00:10:07.128 "data_offset": 2048, 00:10:07.128 "data_size": 63488 00:10:07.128 } 00:10:07.128 ] 00:10:07.128 } 00:10:07.128 } 00:10:07.128 }' 00:10:07.128 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:07.395 pt2 00:10:07.395 pt3 00:10:07.395 pt4' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.395 
04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.395 [2024-12-13 04:26:07.376843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.395 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6843f1ff-518a-4e1b-acd2-5c9f1ca7a649 '!=' 6843f1ff-518a-4e1b-acd2-5c9f1ca7a649 ']' 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83335 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83335 ']' 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83335 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83335 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.656 killing process with pid 83335 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83335' 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83335 00:10:07.656 [2024-12-13 04:26:07.460296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.656 [2024-12-13 04:26:07.460384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.656 [2024-12-13 04:26:07.460472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.656 [2024-12-13 04:26:07.460485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:07.656 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83335 00:10:07.656 [2024-12-13 04:26:07.540081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.915 04:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:07.915 00:10:07.915 real 0m4.223s 00:10:07.915 user 0m6.521s 00:10:07.915 sys 0m0.996s 00:10:07.915 ************************************ 00:10:07.915 END TEST raid_superblock_test 00:10:07.915 ************************************ 00:10:07.915 04:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.915 04:26:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.915 04:26:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:07.915 04:26:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:07.915 04:26:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.915 04:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.175 ************************************ 00:10:08.175 START TEST raid_read_error_test 00:10:08.175 ************************************ 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Gq7YVjivF8 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83587 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83587 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83587 ']' 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.175 04:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.175 [2024-12-13 04:26:08.050066] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:08.175 [2024-12-13 04:26:08.050258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83587 ] 00:10:08.434 [2024-12-13 04:26:08.204593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.434 [2024-12-13 04:26:08.243004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.434 [2024-12-13 04:26:08.318924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.434 [2024-12-13 04:26:08.319057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 BaseBdev1_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 true 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 [2024-12-13 04:26:08.907950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.003 [2024-12-13 04:26:08.908032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.003 [2024-12-13 04:26:08.908059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:09.003 [2024-12-13 04:26:08.908068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.003 [2024-12-13 04:26:08.910564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.003 [2024-12-13 04:26:08.910605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.003 BaseBdev1 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 BaseBdev2_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 true 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 [2024-12-13 04:26:08.954399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:09.003 [2024-12-13 04:26:08.954479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.003 [2024-12-13 04:26:08.954502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:09.003 [2024-12-13 04:26:08.954520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.003 [2024-12-13 04:26:08.956860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.003 [2024-12-13 04:26:08.956896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:09.003 BaseBdev2 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 BaseBdev3_malloc 00:10:09.003 04:26:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 true 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.003 [2024-12-13 04:26:09.000858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:09.003 [2024-12-13 04:26:09.000904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.003 [2024-12-13 04:26:09.000943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:09.003 [2024-12-13 04:26:09.000952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.003 [2024-12-13 04:26:09.003278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.003 [2024-12-13 04:26:09.003314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:09.003 BaseBdev3 00:10:09.003 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.003 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.003 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:09.003 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.003 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.263 BaseBdev4_malloc 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.263 true 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.263 [2024-12-13 04:26:09.058618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:09.263 [2024-12-13 04:26:09.058662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.263 [2024-12-13 04:26:09.058688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:09.263 [2024-12-13 04:26:09.058697] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.263 [2024-12-13 04:26:09.060977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.263 [2024-12-13 04:26:09.061014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:09.263 BaseBdev4 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.263 [2024-12-13 04:26:09.070645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.263 [2024-12-13 04:26:09.072721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.263 [2024-12-13 04:26:09.072800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.263 [2024-12-13 04:26:09.072851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.263 [2024-12-13 04:26:09.073055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:09.263 [2024-12-13 04:26:09.073071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:09.263 [2024-12-13 04:26:09.073322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:09.263 [2024-12-13 04:26:09.073526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:09.263 [2024-12-13 04:26:09.073542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:09.263 [2024-12-13 04:26:09.073656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:09.263 04:26:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.263 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.263 "name": "raid_bdev1", 00:10:09.263 "uuid": "f2b1c8e0-ab65-4a9b-a7fa-969edece84d9", 00:10:09.263 "strip_size_kb": 64, 00:10:09.263 "state": "online", 00:10:09.263 "raid_level": "raid0", 00:10:09.263 "superblock": true, 00:10:09.263 "num_base_bdevs": 4, 00:10:09.263 "num_base_bdevs_discovered": 4, 00:10:09.263 "num_base_bdevs_operational": 4, 00:10:09.263 "base_bdevs_list": [ 00:10:09.263 
{ 00:10:09.263 "name": "BaseBdev1", 00:10:09.263 "uuid": "1e0990e5-662e-5fc3-a8c1-0b781630717a", 00:10:09.263 "is_configured": true, 00:10:09.263 "data_offset": 2048, 00:10:09.263 "data_size": 63488 00:10:09.263 }, 00:10:09.263 { 00:10:09.263 "name": "BaseBdev2", 00:10:09.263 "uuid": "767a19f8-f314-5289-a546-3c93c80e9c79", 00:10:09.263 "is_configured": true, 00:10:09.263 "data_offset": 2048, 00:10:09.263 "data_size": 63488 00:10:09.263 }, 00:10:09.263 { 00:10:09.263 "name": "BaseBdev3", 00:10:09.263 "uuid": "dd4d1c3a-0855-5c07-ab5c-052e281b2fbb", 00:10:09.263 "is_configured": true, 00:10:09.263 "data_offset": 2048, 00:10:09.263 "data_size": 63488 00:10:09.263 }, 00:10:09.263 { 00:10:09.263 "name": "BaseBdev4", 00:10:09.263 "uuid": "408b1732-0d59-5f92-8fc1-5fe912b5b38c", 00:10:09.263 "is_configured": true, 00:10:09.263 "data_offset": 2048, 00:10:09.263 "data_size": 63488 00:10:09.263 } 00:10:09.264 ] 00:10:09.264 }' 00:10:09.264 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.264 04:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.523 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.523 04:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.782 [2024-12-13 04:26:09.614162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.718 04:26:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.718 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.718 04:26:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.718 "name": "raid_bdev1", 00:10:10.718 "uuid": "f2b1c8e0-ab65-4a9b-a7fa-969edece84d9", 00:10:10.718 "strip_size_kb": 64, 00:10:10.718 "state": "online", 00:10:10.718 "raid_level": "raid0", 00:10:10.718 "superblock": true, 00:10:10.718 "num_base_bdevs": 4, 00:10:10.718 "num_base_bdevs_discovered": 4, 00:10:10.718 "num_base_bdevs_operational": 4, 00:10:10.718 "base_bdevs_list": [ 00:10:10.718 { 00:10:10.718 "name": "BaseBdev1", 00:10:10.718 "uuid": "1e0990e5-662e-5fc3-a8c1-0b781630717a", 00:10:10.718 "is_configured": true, 00:10:10.718 "data_offset": 2048, 00:10:10.718 "data_size": 63488 00:10:10.718 }, 00:10:10.718 { 00:10:10.718 "name": "BaseBdev2", 00:10:10.718 "uuid": "767a19f8-f314-5289-a546-3c93c80e9c79", 00:10:10.718 "is_configured": true, 00:10:10.718 "data_offset": 2048, 00:10:10.718 "data_size": 63488 00:10:10.718 }, 00:10:10.718 { 00:10:10.718 "name": "BaseBdev3", 00:10:10.718 "uuid": "dd4d1c3a-0855-5c07-ab5c-052e281b2fbb", 00:10:10.718 "is_configured": true, 00:10:10.718 "data_offset": 2048, 00:10:10.718 "data_size": 63488 00:10:10.718 }, 00:10:10.719 { 00:10:10.719 "name": "BaseBdev4", 00:10:10.719 "uuid": "408b1732-0d59-5f92-8fc1-5fe912b5b38c", 00:10:10.719 "is_configured": true, 00:10:10.719 "data_offset": 2048, 00:10:10.719 "data_size": 63488 00:10:10.719 } 00:10:10.719 ] 00:10:10.719 }' 00:10:10.719 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.719 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.978 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.978 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.978 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.237 [2024-12-13 04:26:10.994306] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.237 [2024-12-13 04:26:10.994345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.237 [2024-12-13 04:26:10.996928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.237 [2024-12-13 04:26:10.996987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.237 [2024-12-13 04:26:10.997039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.237 [2024-12-13 04:26:10.997055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:11.237 { 00:10:11.237 "results": [ 00:10:11.237 { 00:10:11.237 "job": "raid_bdev1", 00:10:11.237 "core_mask": "0x1", 00:10:11.237 "workload": "randrw", 00:10:11.237 "percentage": 50, 00:10:11.238 "status": "finished", 00:10:11.238 "queue_depth": 1, 00:10:11.238 "io_size": 131072, 00:10:11.238 "runtime": 1.380878, 00:10:11.238 "iops": 14449.502418026792, 00:10:11.238 "mibps": 1806.187802253349, 00:10:11.238 "io_failed": 1, 00:10:11.238 "io_timeout": 0, 00:10:11.238 "avg_latency_us": 97.00738808429693, 00:10:11.238 "min_latency_us": 25.152838427947597, 00:10:11.238 "max_latency_us": 1294.9799126637554 00:10:11.238 } 00:10:11.238 ], 00:10:11.238 "core_count": 1 00:10:11.238 } 00:10:11.238 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.238 04:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83587 00:10:11.238 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83587 ']' 00:10:11.238 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83587 00:10:11.238 04:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83587 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83587' 00:10:11.238 killing process with pid 83587 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83587 00:10:11.238 [2024-12-13 04:26:11.042928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.238 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83587 00:10:11.238 [2024-12-13 04:26:11.108422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Gq7YVjivF8 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:11.497 00:10:11.497 real 0m3.502s 00:10:11.497 user 0m4.272s 00:10:11.497 sys 0m0.655s 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:11.497 ************************************ 00:10:11.497 END TEST raid_read_error_test 00:10:11.497 ************************************ 00:10:11.497 04:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.497 04:26:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:11.497 04:26:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.497 04:26:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.497 04:26:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.757 ************************************ 00:10:11.757 START TEST raid_write_error_test 00:10:11.757 ************************************ 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fT1Lh7N5uv 00:10:11.757 04:26:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83723 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83723 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83723 ']' 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.757 04:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.757 [2024-12-13 04:26:11.626829] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:11.757 [2024-12-13 04:26:11.627025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83723 ] 00:10:12.017 [2024-12-13 04:26:11.781702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.017 [2024-12-13 04:26:11.821023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.017 [2024-12-13 04:26:11.897729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.017 [2024-12-13 04:26:11.897769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 BaseBdev1_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 true 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 [2024-12-13 04:26:12.502774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:12.586 [2024-12-13 04:26:12.502847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.586 [2024-12-13 04:26:12.502890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:12.586 [2024-12-13 04:26:12.502910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.586 [2024-12-13 04:26:12.505376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.586 [2024-12-13 04:26:12.505459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:12.586 BaseBdev1 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 BaseBdev2_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:12.586 04:26:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 true 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 [2024-12-13 04:26:12.549163] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:12.586 [2024-12-13 04:26:12.549218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.586 [2024-12-13 04:26:12.549256] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:12.586 [2024-12-13 04:26:12.549275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.586 [2024-12-13 04:26:12.551619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.586 [2024-12-13 04:26:12.551655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:12.586 BaseBdev2 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:12.586 BaseBdev3_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 true 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.586 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 [2024-12-13 04:26:12.595769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:12.586 [2024-12-13 04:26:12.595858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.586 [2024-12-13 04:26:12.595901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:12.586 [2024-12-13 04:26:12.595910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.586 [2024-12-13 04:26:12.598226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.586 [2024-12-13 04:26:12.598263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:12.846 BaseBdev3 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 BaseBdev4_malloc 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 true 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 [2024-12-13 04:26:12.654250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:12.846 [2024-12-13 04:26:12.654313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.846 [2024-12-13 04:26:12.654342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:12.846 [2024-12-13 04:26:12.654351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.846 [2024-12-13 04:26:12.656823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.846 [2024-12-13 04:26:12.656900] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:12.846 BaseBdev4 
00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 [2024-12-13 04:26:12.666297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.846 [2024-12-13 04:26:12.668471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.846 [2024-12-13 04:26:12.668593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.846 [2024-12-13 04:26:12.668670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:12.846 [2024-12-13 04:26:12.668938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:12.846 [2024-12-13 04:26:12.668986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:12.846 [2024-12-13 04:26:12.669288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:12.846 [2024-12-13 04:26:12.669497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:12.846 [2024-12-13 04:26:12.669543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:12.846 [2024-12-13 04:26:12.669714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.846 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.847 "name": "raid_bdev1", 00:10:12.847 "uuid": "5b861faa-3c01-4e85-bee0-f8a82ed150ee", 00:10:12.847 "strip_size_kb": 64, 00:10:12.847 "state": "online", 00:10:12.847 "raid_level": "raid0", 00:10:12.847 "superblock": true, 00:10:12.847 "num_base_bdevs": 4, 00:10:12.847 "num_base_bdevs_discovered": 4, 00:10:12.847 
"num_base_bdevs_operational": 4, 00:10:12.847 "base_bdevs_list": [ 00:10:12.847 { 00:10:12.847 "name": "BaseBdev1", 00:10:12.847 "uuid": "b9e73f36-3c2a-5a63-9743-2649df5fece7", 00:10:12.847 "is_configured": true, 00:10:12.847 "data_offset": 2048, 00:10:12.847 "data_size": 63488 00:10:12.847 }, 00:10:12.847 { 00:10:12.847 "name": "BaseBdev2", 00:10:12.847 "uuid": "723c2cc7-2b13-577c-be0f-0080309142f6", 00:10:12.847 "is_configured": true, 00:10:12.847 "data_offset": 2048, 00:10:12.847 "data_size": 63488 00:10:12.847 }, 00:10:12.847 { 00:10:12.847 "name": "BaseBdev3", 00:10:12.847 "uuid": "c558466e-180e-5414-a149-a746fafc712c", 00:10:12.847 "is_configured": true, 00:10:12.847 "data_offset": 2048, 00:10:12.847 "data_size": 63488 00:10:12.847 }, 00:10:12.847 { 00:10:12.847 "name": "BaseBdev4", 00:10:12.847 "uuid": "0a1eeb4d-3214-52d6-8f7f-d8b32ded7245", 00:10:12.847 "is_configured": true, 00:10:12.847 "data_offset": 2048, 00:10:12.847 "data_size": 63488 00:10:12.847 } 00:10:12.847 ] 00:10:12.847 }' 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.847 04:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.106 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:13.106 04:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:13.365 [2024-12-13 04:26:13.169853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:14.302 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.303 "name": "raid_bdev1", 00:10:14.303 "uuid": "5b861faa-3c01-4e85-bee0-f8a82ed150ee", 00:10:14.303 "strip_size_kb": 64, 00:10:14.303 "state": "online", 00:10:14.303 "raid_level": "raid0", 00:10:14.303 "superblock": true, 00:10:14.303 "num_base_bdevs": 4, 00:10:14.303 "num_base_bdevs_discovered": 4, 00:10:14.303 "num_base_bdevs_operational": 4, 00:10:14.303 "base_bdevs_list": [ 00:10:14.303 { 00:10:14.303 "name": "BaseBdev1", 00:10:14.303 "uuid": "b9e73f36-3c2a-5a63-9743-2649df5fece7", 00:10:14.303 "is_configured": true, 00:10:14.303 "data_offset": 2048, 00:10:14.303 "data_size": 63488 00:10:14.303 }, 00:10:14.303 { 00:10:14.303 "name": "BaseBdev2", 00:10:14.303 "uuid": "723c2cc7-2b13-577c-be0f-0080309142f6", 00:10:14.303 "is_configured": true, 00:10:14.303 "data_offset": 2048, 00:10:14.303 "data_size": 63488 00:10:14.303 }, 00:10:14.303 { 00:10:14.303 "name": "BaseBdev3", 00:10:14.303 "uuid": "c558466e-180e-5414-a149-a746fafc712c", 00:10:14.303 "is_configured": true, 00:10:14.303 "data_offset": 2048, 00:10:14.303 "data_size": 63488 00:10:14.303 }, 00:10:14.303 { 00:10:14.303 "name": "BaseBdev4", 00:10:14.303 "uuid": "0a1eeb4d-3214-52d6-8f7f-d8b32ded7245", 00:10:14.303 "is_configured": true, 00:10:14.303 "data_offset": 2048, 00:10:14.303 "data_size": 63488 00:10:14.303 } 00:10:14.303 ] 00:10:14.303 }' 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.303 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:14.562 [2024-12-13 04:26:14.502379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.562 [2024-12-13 04:26:14.502521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.562 [2024-12-13 04:26:14.505209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.562 [2024-12-13 04:26:14.505307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.562 [2024-12-13 04:26:14.505377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.562 [2024-12-13 04:26:14.505426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:14.562 { 00:10:14.562 "results": [ 00:10:14.562 { 00:10:14.562 "job": "raid_bdev1", 00:10:14.562 "core_mask": "0x1", 00:10:14.562 "workload": "randrw", 00:10:14.562 "percentage": 50, 00:10:14.562 "status": "finished", 00:10:14.562 "queue_depth": 1, 00:10:14.562 "io_size": 131072, 00:10:14.562 "runtime": 1.333261, 00:10:14.562 "iops": 14442.033480316308, 00:10:14.562 "mibps": 1805.2541850395385, 00:10:14.562 "io_failed": 1, 00:10:14.562 "io_timeout": 0, 00:10:14.562 "avg_latency_us": 97.08747739036254, 00:10:14.562 "min_latency_us": 25.152838427947597, 00:10:14.562 "max_latency_us": 1359.3711790393013 00:10:14.562 } 00:10:14.562 ], 00:10:14.562 "core_count": 1 00:10:14.562 } 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83723 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83723 ']' 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83723 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83723 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83723' 00:10:14.562 killing process with pid 83723 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83723 00:10:14.562 [2024-12-13 04:26:14.535541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.562 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83723 00:10:14.837 [2024-12-13 04:26:14.599532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fT1Lh7N5uv 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:15.137 00:10:15.137 real 0m3.413s 00:10:15.137 user 0m4.086s 00:10:15.137 sys 0m0.624s 00:10:15.137 
04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.137 04:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.137 ************************************ 00:10:15.137 END TEST raid_write_error_test 00:10:15.137 ************************************ 00:10:15.137 04:26:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:15.137 04:26:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:15.137 04:26:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.137 04:26:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.137 04:26:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.137 ************************************ 00:10:15.137 START TEST raid_state_function_test 00:10:15.137 ************************************ 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.137 04:26:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:15.137 04:26:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83850 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83850' 00:10:15.137 Process raid pid: 83850 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83850 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83850 ']' 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.137 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.138 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.138 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.138 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.138 [2024-12-13 04:26:15.122593] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:15.138 [2024-12-13 04:26:15.122808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.396 [2024-12-13 04:26:15.277940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.396 [2024-12-13 04:26:15.316386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.396 [2024-12-13 04:26:15.391684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.396 [2024-12-13 04:26:15.391826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.963 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.963 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.963 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.963 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.963 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.963 [2024-12-13 04:26:15.977266] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.963 [2024-12-13 04:26:15.977332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.963 [2024-12-13 04:26:15.977343] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.963 [2024-12-13 04:26:15.977354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.963 [2024-12-13 04:26:15.977360] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:15.963 [2024-12-13 04:26:15.977371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.963 [2024-12-13 04:26:15.977377] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:15.963 [2024-12-13 04:26:15.977386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.222 04:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.222 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.222 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.222 "name": "Existed_Raid", 00:10:16.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.222 "strip_size_kb": 64, 00:10:16.222 "state": "configuring", 00:10:16.222 "raid_level": "concat", 00:10:16.222 "superblock": false, 00:10:16.222 "num_base_bdevs": 4, 00:10:16.222 "num_base_bdevs_discovered": 0, 00:10:16.222 "num_base_bdevs_operational": 4, 00:10:16.222 "base_bdevs_list": [ 00:10:16.222 { 00:10:16.222 "name": "BaseBdev1", 00:10:16.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.222 "is_configured": false, 00:10:16.222 "data_offset": 0, 00:10:16.222 "data_size": 0 00:10:16.222 }, 00:10:16.222 { 00:10:16.222 "name": "BaseBdev2", 00:10:16.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.222 "is_configured": false, 00:10:16.222 "data_offset": 0, 00:10:16.222 "data_size": 0 00:10:16.222 }, 00:10:16.222 { 00:10:16.222 "name": "BaseBdev3", 00:10:16.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.222 "is_configured": false, 00:10:16.222 "data_offset": 0, 00:10:16.222 "data_size": 0 00:10:16.222 }, 00:10:16.222 { 00:10:16.222 "name": "BaseBdev4", 00:10:16.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.222 "is_configured": false, 00:10:16.222 "data_offset": 0, 00:10:16.222 "data_size": 0 00:10:16.222 } 00:10:16.222 ] 00:10:16.222 }' 00:10:16.222 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.222 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.481 [2024-12-13 04:26:16.392473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.481 [2024-12-13 04:26:16.392556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.481 [2024-12-13 04:26:16.404482] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.481 [2024-12-13 04:26:16.404556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.481 [2024-12-13 04:26:16.404582] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.481 [2024-12-13 04:26:16.404604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.481 [2024-12-13 04:26:16.404622] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.481 [2024-12-13 04:26:16.404642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.481 [2024-12-13 04:26:16.404658] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:16.481 [2024-12-13 04:26:16.404678] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.481 [2024-12-13 04:26:16.431380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.481 BaseBdev1 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.481 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.482 [ 00:10:16.482 { 00:10:16.482 "name": "BaseBdev1", 00:10:16.482 "aliases": [ 00:10:16.482 "acde24bb-15de-46c5-abc3-8d3cf17bea98" 00:10:16.482 ], 00:10:16.482 "product_name": "Malloc disk", 00:10:16.482 "block_size": 512, 00:10:16.482 "num_blocks": 65536, 00:10:16.482 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:16.482 "assigned_rate_limits": { 00:10:16.482 "rw_ios_per_sec": 0, 00:10:16.482 "rw_mbytes_per_sec": 0, 00:10:16.482 "r_mbytes_per_sec": 0, 00:10:16.482 "w_mbytes_per_sec": 0 00:10:16.482 }, 00:10:16.482 "claimed": true, 00:10:16.482 "claim_type": "exclusive_write", 00:10:16.482 "zoned": false, 00:10:16.482 "supported_io_types": { 00:10:16.482 "read": true, 00:10:16.482 "write": true, 00:10:16.482 "unmap": true, 00:10:16.482 "flush": true, 00:10:16.482 "reset": true, 00:10:16.482 "nvme_admin": false, 00:10:16.482 "nvme_io": false, 00:10:16.482 "nvme_io_md": false, 00:10:16.482 "write_zeroes": true, 00:10:16.482 "zcopy": true, 00:10:16.482 "get_zone_info": false, 00:10:16.482 "zone_management": false, 00:10:16.482 "zone_append": false, 00:10:16.482 "compare": false, 00:10:16.482 "compare_and_write": false, 00:10:16.482 "abort": true, 00:10:16.482 "seek_hole": false, 00:10:16.482 "seek_data": false, 00:10:16.482 "copy": true, 00:10:16.482 "nvme_iov_md": false 00:10:16.482 }, 00:10:16.482 "memory_domains": [ 00:10:16.482 { 00:10:16.482 "dma_device_id": "system", 00:10:16.482 "dma_device_type": 1 00:10:16.482 }, 00:10:16.482 { 00:10:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.482 "dma_device_type": 2 00:10:16.482 } 00:10:16.482 ], 00:10:16.482 "driver_specific": {} 00:10:16.482 } 00:10:16.482 ] 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.482 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.741 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.741 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.741 "name": "Existed_Raid", 
00:10:16.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.741 "strip_size_kb": 64, 00:10:16.741 "state": "configuring", 00:10:16.741 "raid_level": "concat", 00:10:16.741 "superblock": false, 00:10:16.741 "num_base_bdevs": 4, 00:10:16.741 "num_base_bdevs_discovered": 1, 00:10:16.741 "num_base_bdevs_operational": 4, 00:10:16.741 "base_bdevs_list": [ 00:10:16.741 { 00:10:16.741 "name": "BaseBdev1", 00:10:16.741 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:16.741 "is_configured": true, 00:10:16.741 "data_offset": 0, 00:10:16.741 "data_size": 65536 00:10:16.741 }, 00:10:16.741 { 00:10:16.741 "name": "BaseBdev2", 00:10:16.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.741 "is_configured": false, 00:10:16.741 "data_offset": 0, 00:10:16.741 "data_size": 0 00:10:16.741 }, 00:10:16.741 { 00:10:16.741 "name": "BaseBdev3", 00:10:16.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.741 "is_configured": false, 00:10:16.741 "data_offset": 0, 00:10:16.741 "data_size": 0 00:10:16.741 }, 00:10:16.741 { 00:10:16.741 "name": "BaseBdev4", 00:10:16.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.741 "is_configured": false, 00:10:16.741 "data_offset": 0, 00:10:16.741 "data_size": 0 00:10:16.741 } 00:10:16.741 ] 00:10:16.741 }' 00:10:16.741 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.741 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.000 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.000 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.000 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.000 [2024-12-13 04:26:16.898577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.000 [2024-12-13 04:26:16.898651] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.001 [2024-12-13 04:26:16.910611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.001 [2024-12-13 04:26:16.912620] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.001 [2024-12-13 04:26:16.912659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.001 [2024-12-13 04:26:16.912668] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.001 [2024-12-13 04:26:16.912676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.001 [2024-12-13 04:26:16.912682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.001 [2024-12-13 04:26:16.912690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.001 "name": "Existed_Raid", 00:10:17.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.001 "strip_size_kb": 64, 00:10:17.001 "state": "configuring", 00:10:17.001 "raid_level": "concat", 00:10:17.001 "superblock": false, 00:10:17.001 "num_base_bdevs": 4, 00:10:17.001 
"num_base_bdevs_discovered": 1, 00:10:17.001 "num_base_bdevs_operational": 4, 00:10:17.001 "base_bdevs_list": [ 00:10:17.001 { 00:10:17.001 "name": "BaseBdev1", 00:10:17.001 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:17.001 "is_configured": true, 00:10:17.001 "data_offset": 0, 00:10:17.001 "data_size": 65536 00:10:17.001 }, 00:10:17.001 { 00:10:17.001 "name": "BaseBdev2", 00:10:17.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.001 "is_configured": false, 00:10:17.001 "data_offset": 0, 00:10:17.001 "data_size": 0 00:10:17.001 }, 00:10:17.001 { 00:10:17.001 "name": "BaseBdev3", 00:10:17.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.001 "is_configured": false, 00:10:17.001 "data_offset": 0, 00:10:17.001 "data_size": 0 00:10:17.001 }, 00:10:17.001 { 00:10:17.001 "name": "BaseBdev4", 00:10:17.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.001 "is_configured": false, 00:10:17.001 "data_offset": 0, 00:10:17.001 "data_size": 0 00:10:17.001 } 00:10:17.001 ] 00:10:17.001 }' 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.001 04:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.569 [2024-12-13 04:26:17.362428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.569 BaseBdev2 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:17.569 04:26:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.569 [ 00:10:17.569 { 00:10:17.569 "name": "BaseBdev2", 00:10:17.569 "aliases": [ 00:10:17.569 "4c2c6629-de65-47b8-b442-b28504dda951" 00:10:17.569 ], 00:10:17.569 "product_name": "Malloc disk", 00:10:17.569 "block_size": 512, 00:10:17.569 "num_blocks": 65536, 00:10:17.569 "uuid": "4c2c6629-de65-47b8-b442-b28504dda951", 00:10:17.569 "assigned_rate_limits": { 00:10:17.569 "rw_ios_per_sec": 0, 00:10:17.569 "rw_mbytes_per_sec": 0, 00:10:17.569 "r_mbytes_per_sec": 0, 00:10:17.569 "w_mbytes_per_sec": 0 00:10:17.569 }, 00:10:17.569 "claimed": true, 00:10:17.569 "claim_type": "exclusive_write", 00:10:17.569 "zoned": false, 00:10:17.569 "supported_io_types": { 
00:10:17.569 "read": true, 00:10:17.569 "write": true, 00:10:17.569 "unmap": true, 00:10:17.569 "flush": true, 00:10:17.569 "reset": true, 00:10:17.569 "nvme_admin": false, 00:10:17.569 "nvme_io": false, 00:10:17.569 "nvme_io_md": false, 00:10:17.569 "write_zeroes": true, 00:10:17.569 "zcopy": true, 00:10:17.569 "get_zone_info": false, 00:10:17.569 "zone_management": false, 00:10:17.569 "zone_append": false, 00:10:17.569 "compare": false, 00:10:17.569 "compare_and_write": false, 00:10:17.569 "abort": true, 00:10:17.569 "seek_hole": false, 00:10:17.569 "seek_data": false, 00:10:17.569 "copy": true, 00:10:17.569 "nvme_iov_md": false 00:10:17.569 }, 00:10:17.569 "memory_domains": [ 00:10:17.569 { 00:10:17.569 "dma_device_id": "system", 00:10:17.569 "dma_device_type": 1 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.569 "dma_device_type": 2 00:10:17.569 } 00:10:17.569 ], 00:10:17.569 "driver_specific": {} 00:10:17.569 } 00:10:17.569 ] 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.569 "name": "Existed_Raid", 00:10:17.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.569 "strip_size_kb": 64, 00:10:17.569 "state": "configuring", 00:10:17.569 "raid_level": "concat", 00:10:17.569 "superblock": false, 00:10:17.569 "num_base_bdevs": 4, 00:10:17.569 "num_base_bdevs_discovered": 2, 00:10:17.569 "num_base_bdevs_operational": 4, 00:10:17.569 "base_bdevs_list": [ 00:10:17.569 { 00:10:17.569 "name": "BaseBdev1", 00:10:17.569 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:17.569 "is_configured": true, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 65536 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "name": "BaseBdev2", 00:10:17.569 "uuid": "4c2c6629-de65-47b8-b442-b28504dda951", 00:10:17.569 
"is_configured": true, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 65536 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "name": "BaseBdev3", 00:10:17.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.569 "is_configured": false, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 0 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "name": "BaseBdev4", 00:10:17.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.569 "is_configured": false, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 0 00:10:17.569 } 00:10:17.569 ] 00:10:17.569 }' 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.569 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.828 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.828 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.828 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.088 [2024-12-13 04:26:17.850717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.088 BaseBdev3 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.088 [ 00:10:18.088 { 00:10:18.088 "name": "BaseBdev3", 00:10:18.088 "aliases": [ 00:10:18.088 "9d1fae99-ae2f-4ed0-8e15-10063ebd8cc2" 00:10:18.088 ], 00:10:18.088 "product_name": "Malloc disk", 00:10:18.088 "block_size": 512, 00:10:18.088 "num_blocks": 65536, 00:10:18.088 "uuid": "9d1fae99-ae2f-4ed0-8e15-10063ebd8cc2", 00:10:18.088 "assigned_rate_limits": { 00:10:18.088 "rw_ios_per_sec": 0, 00:10:18.088 "rw_mbytes_per_sec": 0, 00:10:18.088 "r_mbytes_per_sec": 0, 00:10:18.088 "w_mbytes_per_sec": 0 00:10:18.088 }, 00:10:18.088 "claimed": true, 00:10:18.088 "claim_type": "exclusive_write", 00:10:18.088 "zoned": false, 00:10:18.088 "supported_io_types": { 00:10:18.088 "read": true, 00:10:18.088 "write": true, 00:10:18.088 "unmap": true, 00:10:18.088 "flush": true, 00:10:18.088 "reset": true, 00:10:18.088 "nvme_admin": false, 00:10:18.088 "nvme_io": false, 00:10:18.088 "nvme_io_md": false, 00:10:18.088 "write_zeroes": true, 00:10:18.088 "zcopy": true, 00:10:18.088 "get_zone_info": false, 00:10:18.088 "zone_management": false, 00:10:18.088 "zone_append": false, 00:10:18.088 "compare": false, 00:10:18.088 "compare_and_write": false, 
00:10:18.088 "abort": true, 00:10:18.088 "seek_hole": false, 00:10:18.088 "seek_data": false, 00:10:18.088 "copy": true, 00:10:18.088 "nvme_iov_md": false 00:10:18.088 }, 00:10:18.088 "memory_domains": [ 00:10:18.088 { 00:10:18.088 "dma_device_id": "system", 00:10:18.088 "dma_device_type": 1 00:10:18.088 }, 00:10:18.088 { 00:10:18.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.088 "dma_device_type": 2 00:10:18.088 } 00:10:18.088 ], 00:10:18.088 "driver_specific": {} 00:10:18.088 } 00:10:18.088 ] 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.088 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.088 "name": "Existed_Raid", 00:10:18.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.088 "strip_size_kb": 64, 00:10:18.088 "state": "configuring", 00:10:18.088 "raid_level": "concat", 00:10:18.088 "superblock": false, 00:10:18.088 "num_base_bdevs": 4, 00:10:18.088 "num_base_bdevs_discovered": 3, 00:10:18.088 "num_base_bdevs_operational": 4, 00:10:18.088 "base_bdevs_list": [ 00:10:18.088 { 00:10:18.088 "name": "BaseBdev1", 00:10:18.088 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:18.088 "is_configured": true, 00:10:18.088 "data_offset": 0, 00:10:18.088 "data_size": 65536 00:10:18.088 }, 00:10:18.088 { 00:10:18.088 "name": "BaseBdev2", 00:10:18.088 "uuid": "4c2c6629-de65-47b8-b442-b28504dda951", 00:10:18.088 "is_configured": true, 00:10:18.088 "data_offset": 0, 00:10:18.089 "data_size": 65536 00:10:18.089 }, 00:10:18.089 { 00:10:18.089 "name": "BaseBdev3", 00:10:18.089 "uuid": "9d1fae99-ae2f-4ed0-8e15-10063ebd8cc2", 00:10:18.089 "is_configured": true, 00:10:18.089 "data_offset": 0, 00:10:18.089 "data_size": 65536 00:10:18.089 }, 00:10:18.089 { 00:10:18.089 "name": "BaseBdev4", 00:10:18.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.089 "is_configured": false, 
00:10:18.089 "data_offset": 0, 00:10:18.089 "data_size": 0 00:10:18.089 } 00:10:18.089 ] 00:10:18.089 }' 00:10:18.089 04:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.089 04:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.348 [2024-12-13 04:26:18.350735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:18.348 [2024-12-13 04:26:18.350790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:18.348 [2024-12-13 04:26:18.350808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:18.348 [2024-12-13 04:26:18.351161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:18.348 [2024-12-13 04:26:18.351317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:18.348 [2024-12-13 04:26:18.351334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:18.348 [2024-12-13 04:26:18.351583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.348 BaseBdev4 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.348 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.608 [ 00:10:18.608 { 00:10:18.608 "name": "BaseBdev4", 00:10:18.608 "aliases": [ 00:10:18.608 "437cc797-9262-466f-8da7-b5a8c2849b36" 00:10:18.608 ], 00:10:18.608 "product_name": "Malloc disk", 00:10:18.608 "block_size": 512, 00:10:18.608 "num_blocks": 65536, 00:10:18.608 "uuid": "437cc797-9262-466f-8da7-b5a8c2849b36", 00:10:18.608 "assigned_rate_limits": { 00:10:18.608 "rw_ios_per_sec": 0, 00:10:18.608 "rw_mbytes_per_sec": 0, 00:10:18.608 "r_mbytes_per_sec": 0, 00:10:18.608 "w_mbytes_per_sec": 0 00:10:18.608 }, 00:10:18.608 "claimed": true, 00:10:18.608 "claim_type": "exclusive_write", 00:10:18.608 "zoned": false, 00:10:18.608 "supported_io_types": { 00:10:18.608 "read": true, 00:10:18.608 "write": true, 00:10:18.608 "unmap": true, 00:10:18.608 "flush": true, 00:10:18.608 "reset": true, 00:10:18.608 
"nvme_admin": false, 00:10:18.608 "nvme_io": false, 00:10:18.608 "nvme_io_md": false, 00:10:18.608 "write_zeroes": true, 00:10:18.608 "zcopy": true, 00:10:18.608 "get_zone_info": false, 00:10:18.608 "zone_management": false, 00:10:18.608 "zone_append": false, 00:10:18.608 "compare": false, 00:10:18.608 "compare_and_write": false, 00:10:18.608 "abort": true, 00:10:18.608 "seek_hole": false, 00:10:18.608 "seek_data": false, 00:10:18.608 "copy": true, 00:10:18.608 "nvme_iov_md": false 00:10:18.608 }, 00:10:18.608 "memory_domains": [ 00:10:18.608 { 00:10:18.608 "dma_device_id": "system", 00:10:18.608 "dma_device_type": 1 00:10:18.608 }, 00:10:18.608 { 00:10:18.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.608 "dma_device_type": 2 00:10:18.608 } 00:10:18.608 ], 00:10:18.608 "driver_specific": {} 00:10:18.608 } 00:10:18.608 ] 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.608 
04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.608 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.608 "name": "Existed_Raid", 00:10:18.608 "uuid": "9933875c-ba20-49ec-a1c2-4b7e5c97c10f", 00:10:18.608 "strip_size_kb": 64, 00:10:18.608 "state": "online", 00:10:18.608 "raid_level": "concat", 00:10:18.608 "superblock": false, 00:10:18.608 "num_base_bdevs": 4, 00:10:18.608 "num_base_bdevs_discovered": 4, 00:10:18.608 "num_base_bdevs_operational": 4, 00:10:18.608 "base_bdevs_list": [ 00:10:18.608 { 00:10:18.608 "name": "BaseBdev1", 00:10:18.608 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:18.608 "is_configured": true, 00:10:18.608 "data_offset": 0, 00:10:18.608 "data_size": 65536 00:10:18.608 }, 00:10:18.608 { 00:10:18.608 "name": "BaseBdev2", 00:10:18.608 "uuid": "4c2c6629-de65-47b8-b442-b28504dda951", 00:10:18.608 "is_configured": true, 00:10:18.609 "data_offset": 0, 00:10:18.609 "data_size": 65536 00:10:18.609 }, 00:10:18.609 { 00:10:18.609 "name": "BaseBdev3", 
00:10:18.609 "uuid": "9d1fae99-ae2f-4ed0-8e15-10063ebd8cc2", 00:10:18.609 "is_configured": true, 00:10:18.609 "data_offset": 0, 00:10:18.609 "data_size": 65536 00:10:18.609 }, 00:10:18.609 { 00:10:18.609 "name": "BaseBdev4", 00:10:18.609 "uuid": "437cc797-9262-466f-8da7-b5a8c2849b36", 00:10:18.609 "is_configured": true, 00:10:18.609 "data_offset": 0, 00:10:18.609 "data_size": 65536 00:10:18.609 } 00:10:18.609 ] 00:10:18.609 }' 00:10:18.609 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.609 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.868 [2024-12-13 04:26:18.830214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.868 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.868 
04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.868 "name": "Existed_Raid", 00:10:18.868 "aliases": [ 00:10:18.868 "9933875c-ba20-49ec-a1c2-4b7e5c97c10f" 00:10:18.868 ], 00:10:18.868 "product_name": "Raid Volume", 00:10:18.868 "block_size": 512, 00:10:18.868 "num_blocks": 262144, 00:10:18.868 "uuid": "9933875c-ba20-49ec-a1c2-4b7e5c97c10f", 00:10:18.868 "assigned_rate_limits": { 00:10:18.868 "rw_ios_per_sec": 0, 00:10:18.868 "rw_mbytes_per_sec": 0, 00:10:18.868 "r_mbytes_per_sec": 0, 00:10:18.868 "w_mbytes_per_sec": 0 00:10:18.868 }, 00:10:18.868 "claimed": false, 00:10:18.868 "zoned": false, 00:10:18.868 "supported_io_types": { 00:10:18.868 "read": true, 00:10:18.868 "write": true, 00:10:18.868 "unmap": true, 00:10:18.868 "flush": true, 00:10:18.868 "reset": true, 00:10:18.868 "nvme_admin": false, 00:10:18.868 "nvme_io": false, 00:10:18.868 "nvme_io_md": false, 00:10:18.868 "write_zeroes": true, 00:10:18.868 "zcopy": false, 00:10:18.868 "get_zone_info": false, 00:10:18.868 "zone_management": false, 00:10:18.868 "zone_append": false, 00:10:18.868 "compare": false, 00:10:18.868 "compare_and_write": false, 00:10:18.868 "abort": false, 00:10:18.868 "seek_hole": false, 00:10:18.868 "seek_data": false, 00:10:18.868 "copy": false, 00:10:18.868 "nvme_iov_md": false 00:10:18.868 }, 00:10:18.868 "memory_domains": [ 00:10:18.868 { 00:10:18.868 "dma_device_id": "system", 00:10:18.868 "dma_device_type": 1 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.868 "dma_device_type": 2 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": "system", 00:10:18.868 "dma_device_type": 1 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.868 "dma_device_type": 2 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": "system", 00:10:18.868 "dma_device_type": 1 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:18.868 "dma_device_type": 2 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": "system", 00:10:18.868 "dma_device_type": 1 00:10:18.868 }, 00:10:18.868 { 00:10:18.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.868 "dma_device_type": 2 00:10:18.868 } 00:10:18.868 ], 00:10:18.868 "driver_specific": { 00:10:18.868 "raid": { 00:10:18.868 "uuid": "9933875c-ba20-49ec-a1c2-4b7e5c97c10f", 00:10:18.868 "strip_size_kb": 64, 00:10:18.868 "state": "online", 00:10:18.868 "raid_level": "concat", 00:10:18.868 "superblock": false, 00:10:18.868 "num_base_bdevs": 4, 00:10:18.869 "num_base_bdevs_discovered": 4, 00:10:18.869 "num_base_bdevs_operational": 4, 00:10:18.869 "base_bdevs_list": [ 00:10:18.869 { 00:10:18.869 "name": "BaseBdev1", 00:10:18.869 "uuid": "acde24bb-15de-46c5-abc3-8d3cf17bea98", 00:10:18.869 "is_configured": true, 00:10:18.869 "data_offset": 0, 00:10:18.869 "data_size": 65536 00:10:18.869 }, 00:10:18.869 { 00:10:18.869 "name": "BaseBdev2", 00:10:18.869 "uuid": "4c2c6629-de65-47b8-b442-b28504dda951", 00:10:18.869 "is_configured": true, 00:10:18.869 "data_offset": 0, 00:10:18.869 "data_size": 65536 00:10:18.869 }, 00:10:18.869 { 00:10:18.869 "name": "BaseBdev3", 00:10:18.869 "uuid": "9d1fae99-ae2f-4ed0-8e15-10063ebd8cc2", 00:10:18.869 "is_configured": true, 00:10:18.869 "data_offset": 0, 00:10:18.869 "data_size": 65536 00:10:18.869 }, 00:10:18.869 { 00:10:18.869 "name": "BaseBdev4", 00:10:18.869 "uuid": "437cc797-9262-466f-8da7-b5a8c2849b36", 00:10:18.869 "is_configured": true, 00:10:18.869 "data_offset": 0, 00:10:18.869 "data_size": 65536 00:10:18.869 } 00:10:18.869 ] 00:10:18.869 } 00:10:18.869 } 00:10:18.869 }' 00:10:18.869 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:19.128 BaseBdev2 
00:10:19.128 BaseBdev3 00:10:19.128 BaseBdev4' 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.128 04:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.128 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 04:26:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.129 04:26:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 [2024-12-13 04:26:19.137526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.129 [2024-12-13 04:26:19.137558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.129 [2024-12-13 04:26:19.137628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.388 "name": "Existed_Raid", 00:10:19.388 "uuid": "9933875c-ba20-49ec-a1c2-4b7e5c97c10f", 00:10:19.388 "strip_size_kb": 64, 00:10:19.388 "state": "offline", 00:10:19.388 "raid_level": "concat", 00:10:19.388 "superblock": false, 00:10:19.388 "num_base_bdevs": 4, 00:10:19.388 "num_base_bdevs_discovered": 3, 00:10:19.388 "num_base_bdevs_operational": 3, 00:10:19.388 "base_bdevs_list": [ 00:10:19.388 { 00:10:19.388 "name": null, 00:10:19.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.388 "is_configured": false, 00:10:19.388 "data_offset": 0, 00:10:19.388 "data_size": 65536 00:10:19.388 }, 00:10:19.388 { 00:10:19.388 "name": "BaseBdev2", 00:10:19.388 "uuid": "4c2c6629-de65-47b8-b442-b28504dda951", 00:10:19.388 "is_configured": 
true, 00:10:19.388 "data_offset": 0, 00:10:19.388 "data_size": 65536 00:10:19.388 }, 00:10:19.388 { 00:10:19.388 "name": "BaseBdev3", 00:10:19.388 "uuid": "9d1fae99-ae2f-4ed0-8e15-10063ebd8cc2", 00:10:19.388 "is_configured": true, 00:10:19.388 "data_offset": 0, 00:10:19.388 "data_size": 65536 00:10:19.388 }, 00:10:19.388 { 00:10:19.388 "name": "BaseBdev4", 00:10:19.388 "uuid": "437cc797-9262-466f-8da7-b5a8c2849b36", 00:10:19.388 "is_configured": true, 00:10:19.388 "data_offset": 0, 00:10:19.388 "data_size": 65536 00:10:19.388 } 00:10:19.388 ] 00:10:19.388 }' 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.388 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:19.647 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.647 [2024-12-13 04:26:19.641531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 [2024-12-13 04:26:19.705832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:19.906 04:26:19 
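The `verify_raid_bdev_state` helper exercised throughout (bdev_raid.sh@103-@115) selects the named raid bdev from `bdev_raid_get_bdevs all` output and checks its state and base-bdev counts. A rough standalone sketch against sample JSON mirroring the `offline` record logged above (values illustrative, not live output; requires jq):

```shell
# Sample record modeled on the Existed_Raid JSON in the log; not live output.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
operational=$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")

# concat has no redundancy (has_redundancy returns 1 in the trace), so
# removing one base bdev drives the array offline with 3 of 4 discovered.
[[ "$state" == offline && "$discovered" -eq "$operational" ]] && echo state-ok
```

In the real helper the record is picked out of the full array first, with `jq -r '.[] | select(.name == "Existed_Raid")'` as shown in the trace.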
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 [2024-12-13 04:26:19.770161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:19.906 [2024-12-13 04:26:19.770213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 BaseBdev2 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.906 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.907 [ 00:10:19.907 { 00:10:19.907 "name": "BaseBdev2", 00:10:19.907 "aliases": [ 00:10:19.907 "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d" 00:10:19.907 ], 00:10:19.907 "product_name": "Malloc disk", 00:10:19.907 "block_size": 512, 00:10:19.907 "num_blocks": 65536, 00:10:19.907 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:19.907 "assigned_rate_limits": { 00:10:19.907 "rw_ios_per_sec": 0, 00:10:19.907 "rw_mbytes_per_sec": 0, 00:10:19.907 "r_mbytes_per_sec": 0, 00:10:19.907 "w_mbytes_per_sec": 0 00:10:19.907 }, 00:10:19.907 "claimed": false, 00:10:19.907 "zoned": false, 00:10:19.907 "supported_io_types": { 00:10:19.907 "read": true, 00:10:19.907 "write": true, 00:10:19.907 "unmap": true, 00:10:19.907 "flush": true, 00:10:19.907 "reset": true, 00:10:19.907 "nvme_admin": false, 00:10:19.907 "nvme_io": false, 00:10:19.907 "nvme_io_md": false, 00:10:19.907 "write_zeroes": true, 00:10:19.907 "zcopy": true, 00:10:19.907 "get_zone_info": false, 00:10:19.907 "zone_management": false, 00:10:19.907 "zone_append": false, 00:10:19.907 "compare": false, 00:10:19.907 "compare_and_write": false, 00:10:19.907 "abort": true, 00:10:19.907 "seek_hole": false, 00:10:19.907 
"seek_data": false, 00:10:19.907 "copy": true, 00:10:19.907 "nvme_iov_md": false 00:10:19.907 }, 00:10:19.907 "memory_domains": [ 00:10:19.907 { 00:10:19.907 "dma_device_id": "system", 00:10:19.907 "dma_device_type": 1 00:10:19.907 }, 00:10:19.907 { 00:10:19.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.907 "dma_device_type": 2 00:10:19.907 } 00:10:19.907 ], 00:10:19.907 "driver_specific": {} 00:10:19.907 } 00:10:19.907 ] 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.907 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.907 BaseBdev3 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.166 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.166 [ 00:10:20.166 { 00:10:20.166 "name": "BaseBdev3", 00:10:20.166 "aliases": [ 00:10:20.166 "4bcf97da-a205-45d2-8774-61de07bcb61f" 00:10:20.166 ], 00:10:20.166 "product_name": "Malloc disk", 00:10:20.166 "block_size": 512, 00:10:20.166 "num_blocks": 65536, 00:10:20.166 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:20.166 "assigned_rate_limits": { 00:10:20.166 "rw_ios_per_sec": 0, 00:10:20.166 "rw_mbytes_per_sec": 0, 00:10:20.166 "r_mbytes_per_sec": 0, 00:10:20.166 "w_mbytes_per_sec": 0 00:10:20.166 }, 00:10:20.166 "claimed": false, 00:10:20.166 "zoned": false, 00:10:20.166 "supported_io_types": { 00:10:20.166 "read": true, 00:10:20.166 "write": true, 00:10:20.166 "unmap": true, 00:10:20.166 "flush": true, 00:10:20.166 "reset": true, 00:10:20.166 "nvme_admin": false, 00:10:20.166 "nvme_io": false, 00:10:20.166 "nvme_io_md": false, 00:10:20.166 "write_zeroes": true, 00:10:20.166 "zcopy": true, 00:10:20.166 "get_zone_info": false, 00:10:20.166 "zone_management": false, 00:10:20.166 "zone_append": false, 00:10:20.166 "compare": false, 00:10:20.166 "compare_and_write": false, 00:10:20.166 "abort": true, 00:10:20.166 "seek_hole": false, 00:10:20.166 "seek_data": false, 
00:10:20.166 "copy": true, 00:10:20.166 "nvme_iov_md": false 00:10:20.166 }, 00:10:20.166 "memory_domains": [ 00:10:20.166 { 00:10:20.166 "dma_device_id": "system", 00:10:20.166 "dma_device_type": 1 00:10:20.166 }, 00:10:20.166 { 00:10:20.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.166 "dma_device_type": 2 00:10:20.166 } 00:10:20.167 ], 00:10:20.167 "driver_specific": {} 00:10:20.167 } 00:10:20.167 ] 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 BaseBdev4 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.167 
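The `waitforbdev` helper seen above (autotest_common.sh@903-@911) defaults its timeout to 2000 ms, waits for bdev examine to finish, then queries the bdev by name. A hypothetical reimplementation of that polling pattern with `rpc_cmd` stubbed out (the real helper drives SPDK's rpc.py, which is not assumed here):

```shell
# Stub standing in for SPDK's rpc_cmd; the real one shells out to rpc.py.
# It "finds" only BaseBdev4 so both outcomes can be demonstrated.
rpc_cmd() { [[ $* == *BaseBdev4* ]]; }

# Hypothetical waitforbdev sketch: poll bdev_get_bdevs until the named bdev
# shows up or the timeout (default 2000 ms, as in the trace) expires.
waitforbdev() {
    local bdev_name=$1 bdev_timeout=${2:-2000} elapsed=0
    while (( elapsed < bdev_timeout )); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
        (( elapsed += 100 ))
    done
    return 1
}

waitforbdev BaseBdev4 && echo found
```

The real helper additionally calls `rpc_cmd bdev_wait_for_examine` up front, visible in the trace as autotest_common.sh@908, so claims from examine callbacks settle before the lookup.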
04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.167 04:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 [ 00:10:20.167 { 00:10:20.167 "name": "BaseBdev4", 00:10:20.167 "aliases": [ 00:10:20.167 "c3344eff-82e8-4bf2-85b8-e6989f501c49" 00:10:20.167 ], 00:10:20.167 "product_name": "Malloc disk", 00:10:20.167 "block_size": 512, 00:10:20.167 "num_blocks": 65536, 00:10:20.167 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:20.167 "assigned_rate_limits": { 00:10:20.167 "rw_ios_per_sec": 0, 00:10:20.167 "rw_mbytes_per_sec": 0, 00:10:20.167 "r_mbytes_per_sec": 0, 00:10:20.167 "w_mbytes_per_sec": 0 00:10:20.167 }, 00:10:20.167 "claimed": false, 00:10:20.167 "zoned": false, 00:10:20.167 "supported_io_types": { 00:10:20.167 "read": true, 00:10:20.167 "write": true, 00:10:20.167 "unmap": true, 00:10:20.167 "flush": true, 00:10:20.167 "reset": true, 00:10:20.167 "nvme_admin": false, 00:10:20.167 "nvme_io": false, 00:10:20.167 "nvme_io_md": false, 00:10:20.167 "write_zeroes": true, 00:10:20.167 "zcopy": true, 00:10:20.167 "get_zone_info": false, 00:10:20.167 "zone_management": false, 00:10:20.167 "zone_append": false, 00:10:20.167 "compare": false, 00:10:20.167 "compare_and_write": false, 00:10:20.167 "abort": true, 00:10:20.167 "seek_hole": false, 00:10:20.167 "seek_data": false, 00:10:20.167 
"copy": true, 00:10:20.167 "nvme_iov_md": false 00:10:20.167 }, 00:10:20.167 "memory_domains": [ 00:10:20.167 { 00:10:20.167 "dma_device_id": "system", 00:10:20.167 "dma_device_type": 1 00:10:20.167 }, 00:10:20.167 { 00:10:20.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.167 "dma_device_type": 2 00:10:20.167 } 00:10:20.167 ], 00:10:20.167 "driver_specific": {} 00:10:20.167 } 00:10:20.167 ] 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 [2024-12-13 04:26:20.022507] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:20.167 [2024-12-13 04:26:20.022553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:20.167 [2024-12-13 04:26:20.022592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.167 [2024-12-13 04:26:20.024649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.167 [2024-12-13 04:26:20.024699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.167 04:26:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.167 "name": "Existed_Raid", 00:10:20.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.167 "strip_size_kb": 64, 00:10:20.167 "state": "configuring", 00:10:20.167 
"raid_level": "concat", 00:10:20.167 "superblock": false, 00:10:20.167 "num_base_bdevs": 4, 00:10:20.167 "num_base_bdevs_discovered": 3, 00:10:20.167 "num_base_bdevs_operational": 4, 00:10:20.167 "base_bdevs_list": [ 00:10:20.167 { 00:10:20.167 "name": "BaseBdev1", 00:10:20.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.167 "is_configured": false, 00:10:20.167 "data_offset": 0, 00:10:20.167 "data_size": 0 00:10:20.167 }, 00:10:20.167 { 00:10:20.167 "name": "BaseBdev2", 00:10:20.167 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:20.167 "is_configured": true, 00:10:20.167 "data_offset": 0, 00:10:20.167 "data_size": 65536 00:10:20.167 }, 00:10:20.167 { 00:10:20.167 "name": "BaseBdev3", 00:10:20.167 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:20.167 "is_configured": true, 00:10:20.167 "data_offset": 0, 00:10:20.167 "data_size": 65536 00:10:20.167 }, 00:10:20.167 { 00:10:20.167 "name": "BaseBdev4", 00:10:20.167 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:20.167 "is_configured": true, 00:10:20.167 "data_offset": 0, 00:10:20.167 "data_size": 65536 00:10:20.167 } 00:10:20.167 ] 00:10:20.167 }' 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.167 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.735 [2024-12-13 04:26:20.481654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.735 "name": "Existed_Raid", 00:10:20.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.735 "strip_size_kb": 64, 00:10:20.735 "state": "configuring", 00:10:20.735 "raid_level": "concat", 00:10:20.735 "superblock": false, 
00:10:20.735 "num_base_bdevs": 4, 00:10:20.735 "num_base_bdevs_discovered": 2, 00:10:20.735 "num_base_bdevs_operational": 4, 00:10:20.735 "base_bdevs_list": [ 00:10:20.735 { 00:10:20.735 "name": "BaseBdev1", 00:10:20.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.735 "is_configured": false, 00:10:20.735 "data_offset": 0, 00:10:20.735 "data_size": 0 00:10:20.735 }, 00:10:20.735 { 00:10:20.735 "name": null, 00:10:20.735 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:20.735 "is_configured": false, 00:10:20.735 "data_offset": 0, 00:10:20.735 "data_size": 65536 00:10:20.735 }, 00:10:20.735 { 00:10:20.735 "name": "BaseBdev3", 00:10:20.735 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:20.735 "is_configured": true, 00:10:20.735 "data_offset": 0, 00:10:20.735 "data_size": 65536 00:10:20.735 }, 00:10:20.735 { 00:10:20.735 "name": "BaseBdev4", 00:10:20.735 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:20.735 "is_configured": true, 00:10:20.735 "data_offset": 0, 00:10:20.735 "data_size": 65536 00:10:20.735 } 00:10:20.735 ] 00:10:20.735 }' 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.735 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:20.994 04:26:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.994 04:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.994 [2024-12-13 04:26:21.005436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.994 BaseBdev1 00:10:20.994 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.994 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:20.994 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:20.994 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.994 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:20.994 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.995 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.995 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.995 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.995 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.254 [ 00:10:21.254 { 00:10:21.254 "name": "BaseBdev1", 00:10:21.254 "aliases": [ 00:10:21.254 "25b709af-705a-4c52-98f3-754b37735bb2" 00:10:21.254 ], 00:10:21.254 "product_name": "Malloc disk", 00:10:21.254 "block_size": 512, 00:10:21.254 "num_blocks": 65536, 00:10:21.254 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:21.254 "assigned_rate_limits": { 00:10:21.254 "rw_ios_per_sec": 0, 00:10:21.254 "rw_mbytes_per_sec": 0, 00:10:21.254 "r_mbytes_per_sec": 0, 00:10:21.254 "w_mbytes_per_sec": 0 00:10:21.254 }, 00:10:21.254 "claimed": true, 00:10:21.254 "claim_type": "exclusive_write", 00:10:21.254 "zoned": false, 00:10:21.254 "supported_io_types": { 00:10:21.254 "read": true, 00:10:21.254 "write": true, 00:10:21.254 "unmap": true, 00:10:21.254 "flush": true, 00:10:21.254 "reset": true, 00:10:21.254 "nvme_admin": false, 00:10:21.254 "nvme_io": false, 00:10:21.254 "nvme_io_md": false, 00:10:21.254 "write_zeroes": true, 00:10:21.254 "zcopy": true, 00:10:21.254 "get_zone_info": false, 00:10:21.254 "zone_management": false, 00:10:21.254 "zone_append": false, 00:10:21.254 "compare": false, 00:10:21.254 "compare_and_write": false, 00:10:21.254 "abort": true, 00:10:21.254 "seek_hole": false, 00:10:21.254 "seek_data": false, 00:10:21.254 "copy": true, 00:10:21.254 "nvme_iov_md": false 00:10:21.254 }, 00:10:21.254 "memory_domains": [ 00:10:21.254 { 00:10:21.254 "dma_device_id": "system", 00:10:21.254 "dma_device_type": 1 00:10:21.254 }, 00:10:21.254 { 00:10:21.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.254 "dma_device_type": 2 00:10:21.254 } 00:10:21.254 ], 00:10:21.254 "driver_specific": {} 00:10:21.254 } 00:10:21.254 ] 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.254 "name": "Existed_Raid", 00:10:21.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.254 "strip_size_kb": 64, 00:10:21.254 "state": "configuring", 00:10:21.254 "raid_level": "concat", 00:10:21.254 "superblock": false, 
00:10:21.254 "num_base_bdevs": 4, 00:10:21.254 "num_base_bdevs_discovered": 3, 00:10:21.254 "num_base_bdevs_operational": 4, 00:10:21.254 "base_bdevs_list": [ 00:10:21.254 { 00:10:21.254 "name": "BaseBdev1", 00:10:21.254 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:21.254 "is_configured": true, 00:10:21.254 "data_offset": 0, 00:10:21.254 "data_size": 65536 00:10:21.254 }, 00:10:21.254 { 00:10:21.254 "name": null, 00:10:21.254 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:21.254 "is_configured": false, 00:10:21.254 "data_offset": 0, 00:10:21.254 "data_size": 65536 00:10:21.254 }, 00:10:21.254 { 00:10:21.254 "name": "BaseBdev3", 00:10:21.254 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:21.254 "is_configured": true, 00:10:21.254 "data_offset": 0, 00:10:21.254 "data_size": 65536 00:10:21.254 }, 00:10:21.254 { 00:10:21.254 "name": "BaseBdev4", 00:10:21.254 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:21.254 "is_configured": true, 00:10:21.254 "data_offset": 0, 00:10:21.254 "data_size": 65536 00:10:21.254 } 00:10:21.254 ] 00:10:21.254 }' 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.254 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:21.514 04:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.514 [2024-12-13 04:26:21.496640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.514 04:26:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.514 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.773 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.773 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.773 "name": "Existed_Raid", 00:10:21.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.773 "strip_size_kb": 64, 00:10:21.773 "state": "configuring", 00:10:21.773 "raid_level": "concat", 00:10:21.773 "superblock": false, 00:10:21.773 "num_base_bdevs": 4, 00:10:21.773 "num_base_bdevs_discovered": 2, 00:10:21.773 "num_base_bdevs_operational": 4, 00:10:21.773 "base_bdevs_list": [ 00:10:21.773 { 00:10:21.773 "name": "BaseBdev1", 00:10:21.773 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:21.773 "is_configured": true, 00:10:21.773 "data_offset": 0, 00:10:21.773 "data_size": 65536 00:10:21.773 }, 00:10:21.773 { 00:10:21.773 "name": null, 00:10:21.773 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:21.773 "is_configured": false, 00:10:21.773 "data_offset": 0, 00:10:21.773 "data_size": 65536 00:10:21.773 }, 00:10:21.773 { 00:10:21.773 "name": null, 00:10:21.773 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:21.773 "is_configured": false, 00:10:21.773 "data_offset": 0, 00:10:21.773 "data_size": 65536 00:10:21.773 }, 00:10:21.773 { 00:10:21.773 "name": "BaseBdev4", 00:10:21.773 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:21.773 "is_configured": true, 00:10:21.773 "data_offset": 0, 00:10:21.773 "data_size": 65536 00:10:21.773 } 00:10:21.773 ] 00:10:21.773 }' 00:10:21.773 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.773 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.033 [2024-12-13 04:26:21.980583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.033 04:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.033 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.033 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.033 "name": "Existed_Raid", 00:10:22.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.033 "strip_size_kb": 64, 00:10:22.033 "state": "configuring", 00:10:22.033 "raid_level": "concat", 00:10:22.033 "superblock": false, 00:10:22.033 "num_base_bdevs": 4, 00:10:22.033 "num_base_bdevs_discovered": 3, 00:10:22.033 "num_base_bdevs_operational": 4, 00:10:22.033 "base_bdevs_list": [ 00:10:22.033 { 00:10:22.033 "name": "BaseBdev1", 00:10:22.033 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:22.033 "is_configured": true, 00:10:22.033 "data_offset": 0, 00:10:22.033 "data_size": 65536 00:10:22.033 }, 00:10:22.033 { 00:10:22.033 "name": null, 00:10:22.033 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:22.033 "is_configured": false, 00:10:22.033 "data_offset": 0, 00:10:22.033 "data_size": 65536 00:10:22.033 }, 00:10:22.033 { 00:10:22.033 "name": "BaseBdev3", 00:10:22.033 "uuid": 
"4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:22.033 "is_configured": true, 00:10:22.033 "data_offset": 0, 00:10:22.033 "data_size": 65536 00:10:22.033 }, 00:10:22.033 { 00:10:22.033 "name": "BaseBdev4", 00:10:22.033 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:22.033 "is_configured": true, 00:10:22.033 "data_offset": 0, 00:10:22.033 "data_size": 65536 00:10:22.033 } 00:10:22.033 ] 00:10:22.033 }' 00:10:22.033 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.033 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.601 [2024-12-13 04:26:22.480026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.601 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.602 "name": "Existed_Raid", 00:10:22.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.602 "strip_size_kb": 64, 00:10:22.602 "state": "configuring", 00:10:22.602 "raid_level": "concat", 00:10:22.602 "superblock": false, 00:10:22.602 "num_base_bdevs": 4, 00:10:22.602 
"num_base_bdevs_discovered": 2, 00:10:22.602 "num_base_bdevs_operational": 4, 00:10:22.602 "base_bdevs_list": [ 00:10:22.602 { 00:10:22.602 "name": null, 00:10:22.602 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:22.602 "is_configured": false, 00:10:22.602 "data_offset": 0, 00:10:22.602 "data_size": 65536 00:10:22.602 }, 00:10:22.602 { 00:10:22.602 "name": null, 00:10:22.602 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:22.602 "is_configured": false, 00:10:22.602 "data_offset": 0, 00:10:22.602 "data_size": 65536 00:10:22.602 }, 00:10:22.602 { 00:10:22.602 "name": "BaseBdev3", 00:10:22.602 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:22.602 "is_configured": true, 00:10:22.602 "data_offset": 0, 00:10:22.602 "data_size": 65536 00:10:22.602 }, 00:10:22.602 { 00:10:22.602 "name": "BaseBdev4", 00:10:22.602 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:22.602 "is_configured": true, 00:10:22.602 "data_offset": 0, 00:10:22.602 "data_size": 65536 00:10:22.602 } 00:10:22.602 ] 00:10:22.602 }' 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.602 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.169 [2024-12-13 04:26:22.970665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.169 04:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.170 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.170 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.170 "name": "Existed_Raid", 00:10:23.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.170 "strip_size_kb": 64, 00:10:23.170 "state": "configuring", 00:10:23.170 "raid_level": "concat", 00:10:23.170 "superblock": false, 00:10:23.170 "num_base_bdevs": 4, 00:10:23.170 "num_base_bdevs_discovered": 3, 00:10:23.170 "num_base_bdevs_operational": 4, 00:10:23.170 "base_bdevs_list": [ 00:10:23.170 { 00:10:23.170 "name": null, 00:10:23.170 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:23.170 "is_configured": false, 00:10:23.170 "data_offset": 0, 00:10:23.170 "data_size": 65536 00:10:23.170 }, 00:10:23.170 { 00:10:23.170 "name": "BaseBdev2", 00:10:23.170 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:23.170 "is_configured": true, 00:10:23.170 "data_offset": 0, 00:10:23.170 "data_size": 65536 00:10:23.170 }, 00:10:23.170 { 00:10:23.170 "name": "BaseBdev3", 00:10:23.170 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:23.170 "is_configured": true, 00:10:23.170 "data_offset": 0, 00:10:23.170 "data_size": 65536 00:10:23.170 }, 00:10:23.170 { 00:10:23.170 "name": "BaseBdev4", 00:10:23.170 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:23.170 "is_configured": true, 00:10:23.170 "data_offset": 0, 00:10:23.170 "data_size": 65536 00:10:23.170 } 00:10:23.170 ] 00:10:23.170 }' 00:10:23.170 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.170 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:23.429 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 25b709af-705a-4c52-98f3-754b37735bb2 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.688 [2024-12-13 04:26:23.474667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:23.688 [2024-12-13 04:26:23.474739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:23.688 [2024-12-13 04:26:23.474747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:23.688 [2024-12-13 04:26:23.475032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:23.688 [2024-12-13 04:26:23.475169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:23.688 [2024-12-13 04:26:23.475185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:23.688 [2024-12-13 04:26:23.475376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.688 NewBaseBdev 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.688 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.689 [ 00:10:23.689 { 00:10:23.689 "name": "NewBaseBdev", 00:10:23.689 "aliases": [ 00:10:23.689 "25b709af-705a-4c52-98f3-754b37735bb2" 00:10:23.689 ], 00:10:23.689 "product_name": "Malloc disk", 00:10:23.689 "block_size": 512, 00:10:23.689 "num_blocks": 65536, 00:10:23.689 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:23.689 "assigned_rate_limits": { 00:10:23.689 "rw_ios_per_sec": 0, 00:10:23.689 "rw_mbytes_per_sec": 0, 00:10:23.689 "r_mbytes_per_sec": 0, 00:10:23.689 "w_mbytes_per_sec": 0 00:10:23.689 }, 00:10:23.689 "claimed": true, 00:10:23.689 "claim_type": "exclusive_write", 00:10:23.689 "zoned": false, 00:10:23.689 "supported_io_types": { 00:10:23.689 "read": true, 00:10:23.689 "write": true, 00:10:23.689 "unmap": true, 00:10:23.689 "flush": true, 00:10:23.689 "reset": true, 00:10:23.689 "nvme_admin": false, 00:10:23.689 "nvme_io": false, 00:10:23.689 "nvme_io_md": false, 00:10:23.689 "write_zeroes": true, 00:10:23.689 "zcopy": true, 00:10:23.689 "get_zone_info": false, 00:10:23.689 "zone_management": false, 00:10:23.689 "zone_append": false, 00:10:23.689 "compare": false, 00:10:23.689 "compare_and_write": false, 00:10:23.689 "abort": true, 00:10:23.689 "seek_hole": false, 00:10:23.689 "seek_data": false, 00:10:23.689 "copy": true, 00:10:23.689 "nvme_iov_md": false 00:10:23.689 }, 00:10:23.689 "memory_domains": [ 00:10:23.689 { 00:10:23.689 "dma_device_id": "system", 00:10:23.689 "dma_device_type": 1 00:10:23.689 }, 00:10:23.689 { 00:10:23.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.689 "dma_device_type": 2 00:10:23.689 } 00:10:23.689 ], 00:10:23.689 "driver_specific": {} 00:10:23.689 } 00:10:23.689 ] 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.689 "name": "Existed_Raid", 00:10:23.689 "uuid": "454ce3b2-ff58-4e1f-806f-dcafc4e35a82", 00:10:23.689 "strip_size_kb": 64, 00:10:23.689 "state": "online", 00:10:23.689 "raid_level": "concat", 00:10:23.689 "superblock": false, 00:10:23.689 
"num_base_bdevs": 4, 00:10:23.689 "num_base_bdevs_discovered": 4, 00:10:23.689 "num_base_bdevs_operational": 4, 00:10:23.689 "base_bdevs_list": [ 00:10:23.689 { 00:10:23.689 "name": "NewBaseBdev", 00:10:23.689 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:23.689 "is_configured": true, 00:10:23.689 "data_offset": 0, 00:10:23.689 "data_size": 65536 00:10:23.689 }, 00:10:23.689 { 00:10:23.689 "name": "BaseBdev2", 00:10:23.689 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:23.689 "is_configured": true, 00:10:23.689 "data_offset": 0, 00:10:23.689 "data_size": 65536 00:10:23.689 }, 00:10:23.689 { 00:10:23.689 "name": "BaseBdev3", 00:10:23.689 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:23.689 "is_configured": true, 00:10:23.689 "data_offset": 0, 00:10:23.689 "data_size": 65536 00:10:23.689 }, 00:10:23.689 { 00:10:23.689 "name": "BaseBdev4", 00:10:23.689 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:23.689 "is_configured": true, 00:10:23.689 "data_offset": 0, 00:10:23.689 "data_size": 65536 00:10:23.689 } 00:10:23.689 ] 00:10:23.689 }' 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.689 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.257 04:26:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.257 04:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.257 [2024-12-13 04:26:23.986112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.257 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.257 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.257 "name": "Existed_Raid", 00:10:24.257 "aliases": [ 00:10:24.257 "454ce3b2-ff58-4e1f-806f-dcafc4e35a82" 00:10:24.257 ], 00:10:24.257 "product_name": "Raid Volume", 00:10:24.257 "block_size": 512, 00:10:24.257 "num_blocks": 262144, 00:10:24.257 "uuid": "454ce3b2-ff58-4e1f-806f-dcafc4e35a82", 00:10:24.257 "assigned_rate_limits": { 00:10:24.257 "rw_ios_per_sec": 0, 00:10:24.257 "rw_mbytes_per_sec": 0, 00:10:24.257 "r_mbytes_per_sec": 0, 00:10:24.257 "w_mbytes_per_sec": 0 00:10:24.257 }, 00:10:24.257 "claimed": false, 00:10:24.257 "zoned": false, 00:10:24.257 "supported_io_types": { 00:10:24.257 "read": true, 00:10:24.257 "write": true, 00:10:24.257 "unmap": true, 00:10:24.257 "flush": true, 00:10:24.257 "reset": true, 00:10:24.257 "nvme_admin": false, 00:10:24.257 "nvme_io": false, 00:10:24.257 "nvme_io_md": false, 00:10:24.257 "write_zeroes": true, 00:10:24.257 "zcopy": false, 00:10:24.257 "get_zone_info": false, 00:10:24.257 "zone_management": false, 00:10:24.257 "zone_append": false, 00:10:24.257 "compare": false, 00:10:24.257 "compare_and_write": false, 00:10:24.257 "abort": false, 00:10:24.257 "seek_hole": false, 00:10:24.257 "seek_data": false, 00:10:24.258 "copy": false, 00:10:24.258 "nvme_iov_md": false 00:10:24.258 }, 
00:10:24.258 "memory_domains": [ 00:10:24.258 { 00:10:24.258 "dma_device_id": "system", 00:10:24.258 "dma_device_type": 1 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.258 "dma_device_type": 2 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "system", 00:10:24.258 "dma_device_type": 1 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.258 "dma_device_type": 2 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "system", 00:10:24.258 "dma_device_type": 1 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.258 "dma_device_type": 2 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "system", 00:10:24.258 "dma_device_type": 1 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.258 "dma_device_type": 2 00:10:24.258 } 00:10:24.258 ], 00:10:24.258 "driver_specific": { 00:10:24.258 "raid": { 00:10:24.258 "uuid": "454ce3b2-ff58-4e1f-806f-dcafc4e35a82", 00:10:24.258 "strip_size_kb": 64, 00:10:24.258 "state": "online", 00:10:24.258 "raid_level": "concat", 00:10:24.258 "superblock": false, 00:10:24.258 "num_base_bdevs": 4, 00:10:24.258 "num_base_bdevs_discovered": 4, 00:10:24.258 "num_base_bdevs_operational": 4, 00:10:24.258 "base_bdevs_list": [ 00:10:24.258 { 00:10:24.258 "name": "NewBaseBdev", 00:10:24.258 "uuid": "25b709af-705a-4c52-98f3-754b37735bb2", 00:10:24.258 "is_configured": true, 00:10:24.258 "data_offset": 0, 00:10:24.258 "data_size": 65536 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "name": "BaseBdev2", 00:10:24.258 "uuid": "ebf87063-5b83-4bf6-a141-2ccd9f8ab26d", 00:10:24.258 "is_configured": true, 00:10:24.258 "data_offset": 0, 00:10:24.258 "data_size": 65536 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "name": "BaseBdev3", 00:10:24.258 "uuid": "4bcf97da-a205-45d2-8774-61de07bcb61f", 00:10:24.258 "is_configured": true, 00:10:24.258 "data_offset": 0, 
00:10:24.258 "data_size": 65536 00:10:24.258 }, 00:10:24.258 { 00:10:24.258 "name": "BaseBdev4", 00:10:24.258 "uuid": "c3344eff-82e8-4bf2-85b8-e6989f501c49", 00:10:24.258 "is_configured": true, 00:10:24.258 "data_offset": 0, 00:10:24.258 "data_size": 65536 00:10:24.258 } 00:10:24.258 ] 00:10:24.258 } 00:10:24.258 } 00:10:24.258 }' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:24.258 BaseBdev2 00:10:24.258 BaseBdev3 00:10:24.258 BaseBdev4' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.258 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.258 [2024-12-13 04:26:24.269332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.258 [2024-12-13 04:26:24.269364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.258 [2024-12-13 04:26:24.269439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.258 [2024-12-13 04:26:24.269523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.258 [2024-12-13 04:26:24.269548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83850 00:10:24.517 04:26:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83850 ']' 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83850 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83850 00:10:24.517 killing process with pid 83850 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83850' 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 83850 00:10:24.517 [2024-12-13 04:26:24.316530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.517 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 83850 00:10:24.517 [2024-12-13 04:26:24.390695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.776 04:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.776 00:10:24.776 real 0m9.712s 00:10:24.776 user 0m16.291s 00:10:24.776 sys 0m2.124s 00:10:24.776 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.776 04:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.776 ************************************ 00:10:24.776 END TEST raid_state_function_test 00:10:24.776 ************************************ 00:10:24.776 04:26:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:24.776 04:26:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.776 04:26:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.776 04:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.036 ************************************ 00:10:25.036 START TEST raid_state_function_test_sb 00:10:25.036 ************************************ 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84506 00:10:25.036 04:26:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84506' 00:10:25.036 Process raid pid: 84506 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84506 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84506 ']' 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.036 04:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.036 [2024-12-13 04:26:24.887776] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:25.036 [2024-12-13 04:26:24.887924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.036 [2024-12-13 04:26:25.040303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.294 [2024-12-13 04:26:25.078163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.294 [2024-12-13 04:26:25.153243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.294 [2024-12-13 04:26:25.153296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.860 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.860 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:25.860 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.860 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.860 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.860 [2024-12-13 04:26:25.718748] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.861 [2024-12-13 04:26:25.718817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.861 [2024-12-13 04:26:25.718835] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.861 [2024-12-13 04:26:25.718848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.861 [2024-12-13 04:26:25.718854] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:25.861 [2024-12-13 04:26:25.718867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.861 [2024-12-13 04:26:25.718872] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.861 [2024-12-13 04:26:25.718882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.861 
04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.861 "name": "Existed_Raid", 00:10:25.861 "uuid": "d0f09b73-a06b-4878-a56d-d259438d385b", 00:10:25.861 "strip_size_kb": 64, 00:10:25.861 "state": "configuring", 00:10:25.861 "raid_level": "concat", 00:10:25.861 "superblock": true, 00:10:25.861 "num_base_bdevs": 4, 00:10:25.861 "num_base_bdevs_discovered": 0, 00:10:25.861 "num_base_bdevs_operational": 4, 00:10:25.861 "base_bdevs_list": [ 00:10:25.861 { 00:10:25.861 "name": "BaseBdev1", 00:10:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.861 "is_configured": false, 00:10:25.861 "data_offset": 0, 00:10:25.861 "data_size": 0 00:10:25.861 }, 00:10:25.861 { 00:10:25.861 "name": "BaseBdev2", 00:10:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.861 "is_configured": false, 00:10:25.861 "data_offset": 0, 00:10:25.861 "data_size": 0 00:10:25.861 }, 00:10:25.861 { 00:10:25.861 "name": "BaseBdev3", 00:10:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.861 "is_configured": false, 00:10:25.861 "data_offset": 0, 00:10:25.861 "data_size": 0 00:10:25.861 }, 00:10:25.861 { 00:10:25.861 "name": "BaseBdev4", 00:10:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.861 "is_configured": false, 00:10:25.861 "data_offset": 0, 00:10:25.861 "data_size": 0 00:10:25.861 } 00:10:25.861 ] 00:10:25.861 }' 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.861 04:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.430 04:26:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.430 [2024-12-13 04:26:26.197820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.430 [2024-12-13 04:26:26.197866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.430 [2024-12-13 04:26:26.209848] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.430 [2024-12-13 04:26:26.209890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.430 [2024-12-13 04:26:26.209898] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.430 [2024-12-13 04:26:26.209907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.430 [2024-12-13 04:26:26.209913] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.430 [2024-12-13 04:26:26.209922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.430 [2024-12-13 04:26:26.209927] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:26.430 [2024-12-13 04:26:26.209936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.430 [2024-12-13 04:26:26.236922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.430 BaseBdev1 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.430 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.430 [ 00:10:26.430 { 00:10:26.430 "name": "BaseBdev1", 00:10:26.430 "aliases": [ 00:10:26.430 "7743a7bd-70c1-4486-a9b4-e8b6d8e37703" 00:10:26.430 ], 00:10:26.430 "product_name": "Malloc disk", 00:10:26.430 "block_size": 512, 00:10:26.430 "num_blocks": 65536, 00:10:26.430 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:26.430 "assigned_rate_limits": { 00:10:26.430 "rw_ios_per_sec": 0, 00:10:26.430 "rw_mbytes_per_sec": 0, 00:10:26.430 "r_mbytes_per_sec": 0, 00:10:26.430 "w_mbytes_per_sec": 0 00:10:26.430 }, 00:10:26.430 "claimed": true, 00:10:26.430 "claim_type": "exclusive_write", 00:10:26.430 "zoned": false, 00:10:26.431 "supported_io_types": { 00:10:26.431 "read": true, 00:10:26.431 "write": true, 00:10:26.431 "unmap": true, 00:10:26.431 "flush": true, 00:10:26.431 "reset": true, 00:10:26.431 "nvme_admin": false, 00:10:26.431 "nvme_io": false, 00:10:26.431 "nvme_io_md": false, 00:10:26.431 "write_zeroes": true, 00:10:26.431 "zcopy": true, 00:10:26.431 "get_zone_info": false, 00:10:26.431 "zone_management": false, 00:10:26.431 "zone_append": false, 00:10:26.431 "compare": false, 00:10:26.431 "compare_and_write": false, 00:10:26.431 "abort": true, 00:10:26.431 "seek_hole": false, 00:10:26.431 "seek_data": false, 00:10:26.431 "copy": true, 00:10:26.431 "nvme_iov_md": false 00:10:26.431 }, 00:10:26.431 "memory_domains": [ 00:10:26.431 { 00:10:26.431 "dma_device_id": "system", 00:10:26.431 "dma_device_type": 1 00:10:26.431 }, 00:10:26.431 { 00:10:26.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.431 "dma_device_type": 2 00:10:26.431 } 
00:10:26.431 ], 00:10:26.431 "driver_specific": {} 00:10:26.431 } 00:10:26.431 ] 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.431 04:26:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.431 "name": "Existed_Raid", 00:10:26.431 "uuid": "e69ea2b2-7a24-4d0d-91b6-b523aec745c8", 00:10:26.431 "strip_size_kb": 64, 00:10:26.431 "state": "configuring", 00:10:26.431 "raid_level": "concat", 00:10:26.431 "superblock": true, 00:10:26.431 "num_base_bdevs": 4, 00:10:26.431 "num_base_bdevs_discovered": 1, 00:10:26.431 "num_base_bdevs_operational": 4, 00:10:26.431 "base_bdevs_list": [ 00:10:26.431 { 00:10:26.431 "name": "BaseBdev1", 00:10:26.431 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:26.431 "is_configured": true, 00:10:26.431 "data_offset": 2048, 00:10:26.431 "data_size": 63488 00:10:26.431 }, 00:10:26.431 { 00:10:26.431 "name": "BaseBdev2", 00:10:26.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.431 "is_configured": false, 00:10:26.431 "data_offset": 0, 00:10:26.431 "data_size": 0 00:10:26.431 }, 00:10:26.431 { 00:10:26.431 "name": "BaseBdev3", 00:10:26.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.431 "is_configured": false, 00:10:26.431 "data_offset": 0, 00:10:26.431 "data_size": 0 00:10:26.431 }, 00:10:26.431 { 00:10:26.431 "name": "BaseBdev4", 00:10:26.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.431 "is_configured": false, 00:10:26.431 "data_offset": 0, 00:10:26.431 "data_size": 0 00:10:26.431 } 00:10:26.431 ] 00:10:26.431 }' 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.431 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.999 04:26:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.999 [2024-12-13 04:26:26.744113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.999 [2024-12-13 04:26:26.744180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.999 [2024-12-13 04:26:26.752133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.999 [2024-12-13 04:26:26.754388] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.999 [2024-12-13 04:26:26.754477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.999 [2024-12-13 04:26:26.754507] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.999 [2024-12-13 04:26:26.754530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.999 [2024-12-13 04:26:26.754548] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.999 [2024-12-13 04:26:26.754567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:26.999 "name": "Existed_Raid", 00:10:26.999 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:26.999 "strip_size_kb": 64, 00:10:26.999 "state": "configuring", 00:10:26.999 "raid_level": "concat", 00:10:26.999 "superblock": true, 00:10:26.999 "num_base_bdevs": 4, 00:10:26.999 "num_base_bdevs_discovered": 1, 00:10:26.999 "num_base_bdevs_operational": 4, 00:10:26.999 "base_bdevs_list": [ 00:10:26.999 { 00:10:26.999 "name": "BaseBdev1", 00:10:26.999 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:26.999 "is_configured": true, 00:10:26.999 "data_offset": 2048, 00:10:26.999 "data_size": 63488 00:10:26.999 }, 00:10:26.999 { 00:10:26.999 "name": "BaseBdev2", 00:10:26.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.999 "is_configured": false, 00:10:26.999 "data_offset": 0, 00:10:26.999 "data_size": 0 00:10:26.999 }, 00:10:26.999 { 00:10:26.999 "name": "BaseBdev3", 00:10:26.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.999 "is_configured": false, 00:10:26.999 "data_offset": 0, 00:10:26.999 "data_size": 0 00:10:26.999 }, 00:10:26.999 { 00:10:26.999 "name": "BaseBdev4", 00:10:26.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.999 "is_configured": false, 00:10:26.999 "data_offset": 0, 00:10:26.999 "data_size": 0 00:10:26.999 } 00:10:26.999 ] 00:10:26.999 }' 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.999 04:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.258 [2024-12-13 04:26:27.240012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:27.258 BaseBdev2 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.258 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.258 [ 00:10:27.258 { 00:10:27.258 "name": "BaseBdev2", 00:10:27.258 "aliases": [ 00:10:27.258 "dda2957c-42c4-45ca-9d29-18f8163a95af" 00:10:27.258 ], 00:10:27.258 "product_name": "Malloc disk", 00:10:27.258 "block_size": 512, 00:10:27.258 "num_blocks": 65536, 00:10:27.258 "uuid": "dda2957c-42c4-45ca-9d29-18f8163a95af", 
00:10:27.258 "assigned_rate_limits": { 00:10:27.258 "rw_ios_per_sec": 0, 00:10:27.258 "rw_mbytes_per_sec": 0, 00:10:27.258 "r_mbytes_per_sec": 0, 00:10:27.258 "w_mbytes_per_sec": 0 00:10:27.258 }, 00:10:27.258 "claimed": true, 00:10:27.259 "claim_type": "exclusive_write", 00:10:27.259 "zoned": false, 00:10:27.259 "supported_io_types": { 00:10:27.259 "read": true, 00:10:27.259 "write": true, 00:10:27.259 "unmap": true, 00:10:27.259 "flush": true, 00:10:27.259 "reset": true, 00:10:27.518 "nvme_admin": false, 00:10:27.518 "nvme_io": false, 00:10:27.518 "nvme_io_md": false, 00:10:27.518 "write_zeroes": true, 00:10:27.518 "zcopy": true, 00:10:27.518 "get_zone_info": false, 00:10:27.518 "zone_management": false, 00:10:27.518 "zone_append": false, 00:10:27.518 "compare": false, 00:10:27.518 "compare_and_write": false, 00:10:27.518 "abort": true, 00:10:27.518 "seek_hole": false, 00:10:27.518 "seek_data": false, 00:10:27.518 "copy": true, 00:10:27.518 "nvme_iov_md": false 00:10:27.518 }, 00:10:27.518 "memory_domains": [ 00:10:27.518 { 00:10:27.518 "dma_device_id": "system", 00:10:27.518 "dma_device_type": 1 00:10:27.518 }, 00:10:27.518 { 00:10:27.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.518 "dma_device_type": 2 00:10:27.518 } 00:10:27.518 ], 00:10:27.518 "driver_specific": {} 00:10:27.518 } 00:10:27.518 ] 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.518 "name": "Existed_Raid", 00:10:27.518 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:27.518 "strip_size_kb": 64, 00:10:27.518 "state": "configuring", 00:10:27.518 "raid_level": "concat", 00:10:27.518 "superblock": true, 00:10:27.518 "num_base_bdevs": 4, 00:10:27.518 "num_base_bdevs_discovered": 2, 00:10:27.518 
"num_base_bdevs_operational": 4, 00:10:27.518 "base_bdevs_list": [ 00:10:27.518 { 00:10:27.518 "name": "BaseBdev1", 00:10:27.518 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:27.518 "is_configured": true, 00:10:27.518 "data_offset": 2048, 00:10:27.518 "data_size": 63488 00:10:27.518 }, 00:10:27.518 { 00:10:27.518 "name": "BaseBdev2", 00:10:27.518 "uuid": "dda2957c-42c4-45ca-9d29-18f8163a95af", 00:10:27.518 "is_configured": true, 00:10:27.518 "data_offset": 2048, 00:10:27.518 "data_size": 63488 00:10:27.518 }, 00:10:27.518 { 00:10:27.518 "name": "BaseBdev3", 00:10:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.518 "is_configured": false, 00:10:27.518 "data_offset": 0, 00:10:27.518 "data_size": 0 00:10:27.518 }, 00:10:27.518 { 00:10:27.518 "name": "BaseBdev4", 00:10:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.518 "is_configured": false, 00:10:27.518 "data_offset": 0, 00:10:27.518 "data_size": 0 00:10:27.518 } 00:10:27.518 ] 00:10:27.518 }' 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.518 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.777 [2024-12-13 04:26:27.767069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.777 BaseBdev3 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.777 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.778 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.037 [ 00:10:28.037 { 00:10:28.037 "name": "BaseBdev3", 00:10:28.037 "aliases": [ 00:10:28.037 "6c61d253-cf39-4072-bdaf-95f3593e8e64" 00:10:28.037 ], 00:10:28.037 "product_name": "Malloc disk", 00:10:28.037 "block_size": 512, 00:10:28.037 "num_blocks": 65536, 00:10:28.037 "uuid": "6c61d253-cf39-4072-bdaf-95f3593e8e64", 00:10:28.037 "assigned_rate_limits": { 00:10:28.037 "rw_ios_per_sec": 0, 00:10:28.037 "rw_mbytes_per_sec": 0, 00:10:28.037 "r_mbytes_per_sec": 0, 00:10:28.037 "w_mbytes_per_sec": 0 00:10:28.037 }, 00:10:28.037 "claimed": true, 00:10:28.037 "claim_type": "exclusive_write", 00:10:28.037 "zoned": false, 00:10:28.037 "supported_io_types": { 
00:10:28.037 "read": true, 00:10:28.037 "write": true, 00:10:28.037 "unmap": true, 00:10:28.037 "flush": true, 00:10:28.037 "reset": true, 00:10:28.037 "nvme_admin": false, 00:10:28.037 "nvme_io": false, 00:10:28.037 "nvme_io_md": false, 00:10:28.037 "write_zeroes": true, 00:10:28.037 "zcopy": true, 00:10:28.037 "get_zone_info": false, 00:10:28.037 "zone_management": false, 00:10:28.037 "zone_append": false, 00:10:28.037 "compare": false, 00:10:28.037 "compare_and_write": false, 00:10:28.037 "abort": true, 00:10:28.037 "seek_hole": false, 00:10:28.037 "seek_data": false, 00:10:28.037 "copy": true, 00:10:28.037 "nvme_iov_md": false 00:10:28.037 }, 00:10:28.037 "memory_domains": [ 00:10:28.037 { 00:10:28.037 "dma_device_id": "system", 00:10:28.037 "dma_device_type": 1 00:10:28.037 }, 00:10:28.037 { 00:10:28.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.037 "dma_device_type": 2 00:10:28.037 } 00:10:28.037 ], 00:10:28.037 "driver_specific": {} 00:10:28.037 } 00:10:28.037 ] 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.037 "name": "Existed_Raid", 00:10:28.037 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:28.037 "strip_size_kb": 64, 00:10:28.037 "state": "configuring", 00:10:28.037 "raid_level": "concat", 00:10:28.037 "superblock": true, 00:10:28.037 "num_base_bdevs": 4, 00:10:28.037 "num_base_bdevs_discovered": 3, 00:10:28.037 "num_base_bdevs_operational": 4, 00:10:28.037 "base_bdevs_list": [ 00:10:28.037 { 00:10:28.037 "name": "BaseBdev1", 00:10:28.037 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:28.037 "is_configured": true, 00:10:28.037 "data_offset": 2048, 00:10:28.037 "data_size": 63488 00:10:28.037 }, 00:10:28.037 { 00:10:28.037 "name": "BaseBdev2", 00:10:28.037 
"uuid": "dda2957c-42c4-45ca-9d29-18f8163a95af", 00:10:28.037 "is_configured": true, 00:10:28.037 "data_offset": 2048, 00:10:28.037 "data_size": 63488 00:10:28.037 }, 00:10:28.037 { 00:10:28.037 "name": "BaseBdev3", 00:10:28.037 "uuid": "6c61d253-cf39-4072-bdaf-95f3593e8e64", 00:10:28.037 "is_configured": true, 00:10:28.037 "data_offset": 2048, 00:10:28.037 "data_size": 63488 00:10:28.037 }, 00:10:28.037 { 00:10:28.037 "name": "BaseBdev4", 00:10:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.037 "is_configured": false, 00:10:28.037 "data_offset": 0, 00:10:28.037 "data_size": 0 00:10:28.037 } 00:10:28.037 ] 00:10:28.037 }' 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.037 04:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.296 [2024-12-13 04:26:28.294881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.296 [2024-12-13 04:26:28.295110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:28.296 [2024-12-13 04:26:28.295126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.296 [2024-12-13 04:26:28.295489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:28.296 [2024-12-13 04:26:28.295642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:28.296 [2024-12-13 04:26:28.295656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000001900 00:10:28.296 [2024-12-13 04:26:28.295803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.296 BaseBdev4 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.296 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.555 [ 00:10:28.555 { 00:10:28.555 "name": "BaseBdev4", 00:10:28.555 "aliases": [ 00:10:28.555 "ae02d18a-2823-43b8-a120-4207f1579f62" 00:10:28.555 ], 00:10:28.555 "product_name": "Malloc disk", 00:10:28.555 "block_size": 512, 
00:10:28.555 "num_blocks": 65536, 00:10:28.555 "uuid": "ae02d18a-2823-43b8-a120-4207f1579f62", 00:10:28.555 "assigned_rate_limits": { 00:10:28.555 "rw_ios_per_sec": 0, 00:10:28.555 "rw_mbytes_per_sec": 0, 00:10:28.555 "r_mbytes_per_sec": 0, 00:10:28.555 "w_mbytes_per_sec": 0 00:10:28.555 }, 00:10:28.555 "claimed": true, 00:10:28.555 "claim_type": "exclusive_write", 00:10:28.555 "zoned": false, 00:10:28.556 "supported_io_types": { 00:10:28.556 "read": true, 00:10:28.556 "write": true, 00:10:28.556 "unmap": true, 00:10:28.556 "flush": true, 00:10:28.556 "reset": true, 00:10:28.556 "nvme_admin": false, 00:10:28.556 "nvme_io": false, 00:10:28.556 "nvme_io_md": false, 00:10:28.556 "write_zeroes": true, 00:10:28.556 "zcopy": true, 00:10:28.556 "get_zone_info": false, 00:10:28.556 "zone_management": false, 00:10:28.556 "zone_append": false, 00:10:28.556 "compare": false, 00:10:28.556 "compare_and_write": false, 00:10:28.556 "abort": true, 00:10:28.556 "seek_hole": false, 00:10:28.556 "seek_data": false, 00:10:28.556 "copy": true, 00:10:28.556 "nvme_iov_md": false 00:10:28.556 }, 00:10:28.556 "memory_domains": [ 00:10:28.556 { 00:10:28.556 "dma_device_id": "system", 00:10:28.556 "dma_device_type": 1 00:10:28.556 }, 00:10:28.556 { 00:10:28.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.556 "dma_device_type": 2 00:10:28.556 } 00:10:28.556 ], 00:10:28.556 "driver_specific": {} 00:10:28.556 } 00:10:28.556 ] 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.556 "name": "Existed_Raid", 00:10:28.556 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:28.556 "strip_size_kb": 64, 00:10:28.556 "state": "online", 00:10:28.556 "raid_level": "concat", 00:10:28.556 "superblock": true, 00:10:28.556 "num_base_bdevs": 
4, 00:10:28.556 "num_base_bdevs_discovered": 4, 00:10:28.556 "num_base_bdevs_operational": 4, 00:10:28.556 "base_bdevs_list": [ 00:10:28.556 { 00:10:28.556 "name": "BaseBdev1", 00:10:28.556 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:28.556 "is_configured": true, 00:10:28.556 "data_offset": 2048, 00:10:28.556 "data_size": 63488 00:10:28.556 }, 00:10:28.556 { 00:10:28.556 "name": "BaseBdev2", 00:10:28.556 "uuid": "dda2957c-42c4-45ca-9d29-18f8163a95af", 00:10:28.556 "is_configured": true, 00:10:28.556 "data_offset": 2048, 00:10:28.556 "data_size": 63488 00:10:28.556 }, 00:10:28.556 { 00:10:28.556 "name": "BaseBdev3", 00:10:28.556 "uuid": "6c61d253-cf39-4072-bdaf-95f3593e8e64", 00:10:28.556 "is_configured": true, 00:10:28.556 "data_offset": 2048, 00:10:28.556 "data_size": 63488 00:10:28.556 }, 00:10:28.556 { 00:10:28.556 "name": "BaseBdev4", 00:10:28.556 "uuid": "ae02d18a-2823-43b8-a120-4207f1579f62", 00:10:28.556 "is_configured": true, 00:10:28.556 "data_offset": 2048, 00:10:28.556 "data_size": 63488 00:10:28.556 } 00:10:28.556 ] 00:10:28.556 }' 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.556 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.815 
04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.815 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.815 [2024-12-13 04:26:28.814330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.074 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.074 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.075 "name": "Existed_Raid", 00:10:29.075 "aliases": [ 00:10:29.075 "c951d3d8-ef24-4c61-bf91-96ed60e93e36" 00:10:29.075 ], 00:10:29.075 "product_name": "Raid Volume", 00:10:29.075 "block_size": 512, 00:10:29.075 "num_blocks": 253952, 00:10:29.075 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:29.075 "assigned_rate_limits": { 00:10:29.075 "rw_ios_per_sec": 0, 00:10:29.075 "rw_mbytes_per_sec": 0, 00:10:29.075 "r_mbytes_per_sec": 0, 00:10:29.075 "w_mbytes_per_sec": 0 00:10:29.075 }, 00:10:29.075 "claimed": false, 00:10:29.075 "zoned": false, 00:10:29.075 "supported_io_types": { 00:10:29.075 "read": true, 00:10:29.075 "write": true, 00:10:29.075 "unmap": true, 00:10:29.075 "flush": true, 00:10:29.075 "reset": true, 00:10:29.075 "nvme_admin": false, 00:10:29.075 "nvme_io": false, 00:10:29.075 "nvme_io_md": false, 00:10:29.075 "write_zeroes": true, 00:10:29.075 "zcopy": false, 00:10:29.075 "get_zone_info": false, 00:10:29.075 "zone_management": false, 00:10:29.075 "zone_append": false, 00:10:29.075 "compare": false, 00:10:29.075 "compare_and_write": false, 00:10:29.075 "abort": false, 00:10:29.075 "seek_hole": false, 00:10:29.075 "seek_data": false, 00:10:29.075 "copy": false, 00:10:29.075 
"nvme_iov_md": false 00:10:29.075 }, 00:10:29.075 "memory_domains": [ 00:10:29.075 { 00:10:29.075 "dma_device_id": "system", 00:10:29.075 "dma_device_type": 1 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.075 "dma_device_type": 2 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "system", 00:10:29.075 "dma_device_type": 1 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.075 "dma_device_type": 2 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "system", 00:10:29.075 "dma_device_type": 1 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.075 "dma_device_type": 2 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "system", 00:10:29.075 "dma_device_type": 1 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.075 "dma_device_type": 2 00:10:29.075 } 00:10:29.075 ], 00:10:29.075 "driver_specific": { 00:10:29.075 "raid": { 00:10:29.075 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:29.075 "strip_size_kb": 64, 00:10:29.075 "state": "online", 00:10:29.075 "raid_level": "concat", 00:10:29.075 "superblock": true, 00:10:29.075 "num_base_bdevs": 4, 00:10:29.075 "num_base_bdevs_discovered": 4, 00:10:29.075 "num_base_bdevs_operational": 4, 00:10:29.075 "base_bdevs_list": [ 00:10:29.075 { 00:10:29.075 "name": "BaseBdev1", 00:10:29.075 "uuid": "7743a7bd-70c1-4486-a9b4-e8b6d8e37703", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "name": "BaseBdev2", 00:10:29.075 "uuid": "dda2957c-42c4-45ca-9d29-18f8163a95af", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "name": "BaseBdev3", 00:10:29.075 "uuid": "6c61d253-cf39-4072-bdaf-95f3593e8e64", 00:10:29.075 "is_configured": true, 
00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "name": "BaseBdev4", 00:10:29.075 "uuid": "ae02d18a-2823-43b8-a120-4207f1579f62", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 } 00:10:29.075 ] 00:10:29.075 } 00:10:29.075 } 00:10:29.075 }' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.075 BaseBdev2 00:10:29.075 BaseBdev3 00:10:29.075 BaseBdev4' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.075 04:26:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.075 04:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.075 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.075 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.075 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.076 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.076 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.076 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.076 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.076 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.076 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.335 [2024-12-13 04:26:29.149508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.335 [2024-12-13 04:26:29.149583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.335 [2024-12-13 04:26:29.149683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:29.335 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.336 "name": "Existed_Raid", 00:10:29.336 "uuid": "c951d3d8-ef24-4c61-bf91-96ed60e93e36", 00:10:29.336 "strip_size_kb": 64, 00:10:29.336 "state": "offline", 00:10:29.336 "raid_level": "concat", 00:10:29.336 "superblock": true, 00:10:29.336 "num_base_bdevs": 4, 00:10:29.336 "num_base_bdevs_discovered": 3, 00:10:29.336 "num_base_bdevs_operational": 3, 00:10:29.336 "base_bdevs_list": [ 00:10:29.336 { 00:10:29.336 "name": null, 00:10:29.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.336 "is_configured": false, 00:10:29.336 "data_offset": 0, 00:10:29.336 "data_size": 63488 00:10:29.336 }, 00:10:29.336 { 00:10:29.336 "name": "BaseBdev2", 00:10:29.336 "uuid": "dda2957c-42c4-45ca-9d29-18f8163a95af", 00:10:29.336 "is_configured": true, 00:10:29.336 "data_offset": 2048, 00:10:29.336 "data_size": 63488 00:10:29.336 }, 00:10:29.336 { 00:10:29.336 "name": "BaseBdev3", 00:10:29.336 "uuid": "6c61d253-cf39-4072-bdaf-95f3593e8e64", 00:10:29.336 "is_configured": true, 00:10:29.336 "data_offset": 2048, 00:10:29.336 "data_size": 63488 00:10:29.336 }, 00:10:29.336 { 00:10:29.336 "name": "BaseBdev4", 00:10:29.336 "uuid": "ae02d18a-2823-43b8-a120-4207f1579f62", 00:10:29.336 "is_configured": true, 00:10:29.336 "data_offset": 2048, 00:10:29.336 "data_size": 63488 00:10:29.336 } 00:10:29.336 ] 00:10:29.336 }' 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.336 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.904 04:26:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 [2024-12-13 04:26:29.685340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 [2024-12-13 04:26:29.765697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:29.904 04:26:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 [2024-12-13 04:26:29.845803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:29.904 [2024-12-13 04:26:29.845854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.904 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 BaseBdev2 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [ 00:10:30.163 { 00:10:30.163 "name": "BaseBdev2", 00:10:30.163 "aliases": [ 00:10:30.163 
"388a6ccf-3165-4a29-9367-d4749f132df2" 00:10:30.163 ], 00:10:30.163 "product_name": "Malloc disk", 00:10:30.163 "block_size": 512, 00:10:30.163 "num_blocks": 65536, 00:10:30.163 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:30.163 "assigned_rate_limits": { 00:10:30.163 "rw_ios_per_sec": 0, 00:10:30.163 "rw_mbytes_per_sec": 0, 00:10:30.163 "r_mbytes_per_sec": 0, 00:10:30.163 "w_mbytes_per_sec": 0 00:10:30.163 }, 00:10:30.163 "claimed": false, 00:10:30.163 "zoned": false, 00:10:30.163 "supported_io_types": { 00:10:30.163 "read": true, 00:10:30.163 "write": true, 00:10:30.163 "unmap": true, 00:10:30.163 "flush": true, 00:10:30.163 "reset": true, 00:10:30.163 "nvme_admin": false, 00:10:30.163 "nvme_io": false, 00:10:30.163 "nvme_io_md": false, 00:10:30.163 "write_zeroes": true, 00:10:30.163 "zcopy": true, 00:10:30.163 "get_zone_info": false, 00:10:30.163 "zone_management": false, 00:10:30.163 "zone_append": false, 00:10:30.163 "compare": false, 00:10:30.163 "compare_and_write": false, 00:10:30.163 "abort": true, 00:10:30.163 "seek_hole": false, 00:10:30.163 "seek_data": false, 00:10:30.163 "copy": true, 00:10:30.163 "nvme_iov_md": false 00:10:30.163 }, 00:10:30.163 "memory_domains": [ 00:10:30.163 { 00:10:30.163 "dma_device_id": "system", 00:10:30.163 "dma_device_type": 1 00:10:30.163 }, 00:10:30.163 { 00:10:30.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.163 "dma_device_type": 2 00:10:30.163 } 00:10:30.163 ], 00:10:30.163 "driver_specific": {} 00:10:30.163 } 00:10:30.163 ] 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.163 04:26:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 04:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 BaseBdev3 00:10:30.163 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.163 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:30.163 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 [ 00:10:30.164 { 
00:10:30.164 "name": "BaseBdev3", 00:10:30.164 "aliases": [ 00:10:30.164 "f2ed500f-535c-4fc3-ad28-86f5c9e2b767" 00:10:30.164 ], 00:10:30.164 "product_name": "Malloc disk", 00:10:30.164 "block_size": 512, 00:10:30.164 "num_blocks": 65536, 00:10:30.164 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:30.164 "assigned_rate_limits": { 00:10:30.164 "rw_ios_per_sec": 0, 00:10:30.164 "rw_mbytes_per_sec": 0, 00:10:30.164 "r_mbytes_per_sec": 0, 00:10:30.164 "w_mbytes_per_sec": 0 00:10:30.164 }, 00:10:30.164 "claimed": false, 00:10:30.164 "zoned": false, 00:10:30.164 "supported_io_types": { 00:10:30.164 "read": true, 00:10:30.164 "write": true, 00:10:30.164 "unmap": true, 00:10:30.164 "flush": true, 00:10:30.164 "reset": true, 00:10:30.164 "nvme_admin": false, 00:10:30.164 "nvme_io": false, 00:10:30.164 "nvme_io_md": false, 00:10:30.164 "write_zeroes": true, 00:10:30.164 "zcopy": true, 00:10:30.164 "get_zone_info": false, 00:10:30.164 "zone_management": false, 00:10:30.164 "zone_append": false, 00:10:30.164 "compare": false, 00:10:30.164 "compare_and_write": false, 00:10:30.164 "abort": true, 00:10:30.164 "seek_hole": false, 00:10:30.164 "seek_data": false, 00:10:30.164 "copy": true, 00:10:30.164 "nvme_iov_md": false 00:10:30.164 }, 00:10:30.164 "memory_domains": [ 00:10:30.164 { 00:10:30.164 "dma_device_id": "system", 00:10:30.164 "dma_device_type": 1 00:10:30.164 }, 00:10:30.164 { 00:10:30.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.164 "dma_device_type": 2 00:10:30.164 } 00:10:30.164 ], 00:10:30.164 "driver_specific": {} 00:10:30.164 } 00:10:30.164 ] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 BaseBdev4 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:30.164 [ 00:10:30.164 { 00:10:30.164 "name": "BaseBdev4", 00:10:30.164 "aliases": [ 00:10:30.164 "8e3949b2-3084-4c36-984d-73cda5eb95b5" 00:10:30.164 ], 00:10:30.164 "product_name": "Malloc disk", 00:10:30.164 "block_size": 512, 00:10:30.164 "num_blocks": 65536, 00:10:30.164 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:30.164 "assigned_rate_limits": { 00:10:30.164 "rw_ios_per_sec": 0, 00:10:30.164 "rw_mbytes_per_sec": 0, 00:10:30.164 "r_mbytes_per_sec": 0, 00:10:30.164 "w_mbytes_per_sec": 0 00:10:30.164 }, 00:10:30.164 "claimed": false, 00:10:30.164 "zoned": false, 00:10:30.164 "supported_io_types": { 00:10:30.164 "read": true, 00:10:30.164 "write": true, 00:10:30.164 "unmap": true, 00:10:30.164 "flush": true, 00:10:30.164 "reset": true, 00:10:30.164 "nvme_admin": false, 00:10:30.164 "nvme_io": false, 00:10:30.164 "nvme_io_md": false, 00:10:30.164 "write_zeroes": true, 00:10:30.164 "zcopy": true, 00:10:30.164 "get_zone_info": false, 00:10:30.164 "zone_management": false, 00:10:30.164 "zone_append": false, 00:10:30.164 "compare": false, 00:10:30.164 "compare_and_write": false, 00:10:30.164 "abort": true, 00:10:30.164 "seek_hole": false, 00:10:30.164 "seek_data": false, 00:10:30.164 "copy": true, 00:10:30.164 "nvme_iov_md": false 00:10:30.164 }, 00:10:30.164 "memory_domains": [ 00:10:30.164 { 00:10:30.164 "dma_device_id": "system", 00:10:30.164 "dma_device_type": 1 00:10:30.164 }, 00:10:30.164 { 00:10:30.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.164 "dma_device_type": 2 00:10:30.164 } 00:10:30.164 ], 00:10:30.164 "driver_specific": {} 00:10:30.164 } 00:10:30.164 ] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.164 04:26:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 [2024-12-13 04:26:30.106375] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.164 [2024-12-13 04:26:30.106472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.164 [2024-12-13 04:26:30.106553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.164 [2024-12-13 04:26:30.108673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.164 [2024-12-13 04:26:30.108760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.164 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.164 "name": "Existed_Raid", 00:10:30.164 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:30.164 "strip_size_kb": 64, 00:10:30.164 "state": "configuring", 00:10:30.165 "raid_level": "concat", 00:10:30.165 "superblock": true, 00:10:30.165 "num_base_bdevs": 4, 00:10:30.165 "num_base_bdevs_discovered": 3, 00:10:30.165 "num_base_bdevs_operational": 4, 00:10:30.165 "base_bdevs_list": [ 00:10:30.165 { 00:10:30.165 "name": "BaseBdev1", 00:10:30.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.165 "is_configured": false, 00:10:30.165 "data_offset": 0, 00:10:30.165 "data_size": 0 00:10:30.165 }, 00:10:30.165 { 00:10:30.165 "name": "BaseBdev2", 00:10:30.165 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:30.165 "is_configured": true, 00:10:30.165 "data_offset": 2048, 00:10:30.165 "data_size": 63488 
00:10:30.165 }, 00:10:30.165 { 00:10:30.165 "name": "BaseBdev3", 00:10:30.165 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:30.165 "is_configured": true, 00:10:30.165 "data_offset": 2048, 00:10:30.165 "data_size": 63488 00:10:30.165 }, 00:10:30.165 { 00:10:30.165 "name": "BaseBdev4", 00:10:30.165 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:30.165 "is_configured": true, 00:10:30.165 "data_offset": 2048, 00:10:30.165 "data_size": 63488 00:10:30.165 } 00:10:30.165 ] 00:10:30.165 }' 00:10:30.165 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.165 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.733 [2024-12-13 04:26:30.573550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.733 "name": "Existed_Raid", 00:10:30.733 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:30.733 "strip_size_kb": 64, 00:10:30.733 "state": "configuring", 00:10:30.733 "raid_level": "concat", 00:10:30.733 "superblock": true, 00:10:30.733 "num_base_bdevs": 4, 00:10:30.733 "num_base_bdevs_discovered": 2, 00:10:30.733 "num_base_bdevs_operational": 4, 00:10:30.733 "base_bdevs_list": [ 00:10:30.733 { 00:10:30.733 "name": "BaseBdev1", 00:10:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.733 "is_configured": false, 00:10:30.733 "data_offset": 0, 00:10:30.733 "data_size": 0 00:10:30.733 }, 00:10:30.733 { 00:10:30.733 "name": null, 00:10:30.733 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:30.733 "is_configured": false, 00:10:30.733 "data_offset": 0, 00:10:30.733 "data_size": 63488 
00:10:30.733 }, 00:10:30.733 { 00:10:30.733 "name": "BaseBdev3", 00:10:30.733 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:30.733 "is_configured": true, 00:10:30.733 "data_offset": 2048, 00:10:30.733 "data_size": 63488 00:10:30.733 }, 00:10:30.733 { 00:10:30.733 "name": "BaseBdev4", 00:10:30.733 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:30.733 "is_configured": true, 00:10:30.733 "data_offset": 2048, 00:10:30.733 "data_size": 63488 00:10:30.733 } 00:10:30.733 ] 00:10:30.733 }' 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.733 04:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 [2024-12-13 04:26:31.093340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.309 BaseBdev1 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.309 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 [ 00:10:31.309 { 00:10:31.309 "name": "BaseBdev1", 00:10:31.309 "aliases": [ 00:10:31.309 "a34ae3d1-7e74-4b4d-a7dc-031c7e167101" 00:10:31.309 ], 00:10:31.309 "product_name": "Malloc disk", 00:10:31.309 "block_size": 512, 00:10:31.309 "num_blocks": 65536, 00:10:31.309 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:31.309 "assigned_rate_limits": { 00:10:31.309 "rw_ios_per_sec": 0, 00:10:31.309 "rw_mbytes_per_sec": 0, 
00:10:31.309 "r_mbytes_per_sec": 0, 00:10:31.309 "w_mbytes_per_sec": 0 00:10:31.309 }, 00:10:31.309 "claimed": true, 00:10:31.309 "claim_type": "exclusive_write", 00:10:31.309 "zoned": false, 00:10:31.309 "supported_io_types": { 00:10:31.309 "read": true, 00:10:31.310 "write": true, 00:10:31.310 "unmap": true, 00:10:31.310 "flush": true, 00:10:31.310 "reset": true, 00:10:31.310 "nvme_admin": false, 00:10:31.310 "nvme_io": false, 00:10:31.310 "nvme_io_md": false, 00:10:31.310 "write_zeroes": true, 00:10:31.310 "zcopy": true, 00:10:31.310 "get_zone_info": false, 00:10:31.310 "zone_management": false, 00:10:31.310 "zone_append": false, 00:10:31.310 "compare": false, 00:10:31.310 "compare_and_write": false, 00:10:31.310 "abort": true, 00:10:31.310 "seek_hole": false, 00:10:31.310 "seek_data": false, 00:10:31.310 "copy": true, 00:10:31.310 "nvme_iov_md": false 00:10:31.310 }, 00:10:31.310 "memory_domains": [ 00:10:31.310 { 00:10:31.310 "dma_device_id": "system", 00:10:31.310 "dma_device_type": 1 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.310 "dma_device_type": 2 00:10:31.310 } 00:10:31.310 ], 00:10:31.310 "driver_specific": {} 00:10:31.310 } 00:10:31.310 ] 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.310 04:26:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.310 "name": "Existed_Raid", 00:10:31.310 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:31.310 "strip_size_kb": 64, 00:10:31.310 "state": "configuring", 00:10:31.310 "raid_level": "concat", 00:10:31.310 "superblock": true, 00:10:31.310 "num_base_bdevs": 4, 00:10:31.310 "num_base_bdevs_discovered": 3, 00:10:31.310 "num_base_bdevs_operational": 4, 00:10:31.310 "base_bdevs_list": [ 00:10:31.310 { 00:10:31.310 "name": "BaseBdev1", 00:10:31.310 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 "data_size": 63488 00:10:31.310 }, 00:10:31.310 { 
00:10:31.310 "name": null, 00:10:31.310 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:31.310 "is_configured": false, 00:10:31.310 "data_offset": 0, 00:10:31.310 "data_size": 63488 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "name": "BaseBdev3", 00:10:31.310 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 "data_size": 63488 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "name": "BaseBdev4", 00:10:31.310 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:31.310 "is_configured": true, 00:10:31.310 "data_offset": 2048, 00:10:31.310 "data_size": 63488 00:10:31.310 } 00:10:31.310 ] 00:10:31.310 }' 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.310 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.909 [2024-12-13 04:26:31.656556] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.909 04:26:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.909 "name": "Existed_Raid", 00:10:31.909 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:31.909 "strip_size_kb": 64, 00:10:31.909 "state": "configuring", 00:10:31.909 "raid_level": "concat", 00:10:31.909 "superblock": true, 00:10:31.909 "num_base_bdevs": 4, 00:10:31.909 "num_base_bdevs_discovered": 2, 00:10:31.909 "num_base_bdevs_operational": 4, 00:10:31.909 "base_bdevs_list": [ 00:10:31.909 { 00:10:31.909 "name": "BaseBdev1", 00:10:31.909 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:31.909 "is_configured": true, 00:10:31.909 "data_offset": 2048, 00:10:31.909 "data_size": 63488 00:10:31.909 }, 00:10:31.909 { 00:10:31.909 "name": null, 00:10:31.909 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:31.909 "is_configured": false, 00:10:31.909 "data_offset": 0, 00:10:31.909 "data_size": 63488 00:10:31.909 }, 00:10:31.909 { 00:10:31.909 "name": null, 00:10:31.909 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:31.909 "is_configured": false, 00:10:31.909 "data_offset": 0, 00:10:31.909 "data_size": 63488 00:10:31.909 }, 00:10:31.909 { 00:10:31.909 "name": "BaseBdev4", 00:10:31.909 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:31.909 "is_configured": true, 00:10:31.909 "data_offset": 2048, 00:10:31.909 "data_size": 63488 00:10:31.909 } 00:10:31.909 ] 00:10:31.909 }' 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.909 04:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.169 
04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.169 [2024-12-13 04:26:32.156597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.169 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.428 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.428 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.428 "name": "Existed_Raid", 00:10:32.428 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:32.428 "strip_size_kb": 64, 00:10:32.428 "state": "configuring", 00:10:32.428 "raid_level": "concat", 00:10:32.428 "superblock": true, 00:10:32.428 "num_base_bdevs": 4, 00:10:32.428 "num_base_bdevs_discovered": 3, 00:10:32.428 "num_base_bdevs_operational": 4, 00:10:32.428 "base_bdevs_list": [ 00:10:32.428 { 00:10:32.428 "name": "BaseBdev1", 00:10:32.428 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:32.428 "is_configured": true, 00:10:32.428 "data_offset": 2048, 00:10:32.428 "data_size": 63488 00:10:32.428 }, 00:10:32.428 { 00:10:32.428 "name": null, 00:10:32.428 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:32.428 "is_configured": false, 00:10:32.428 "data_offset": 0, 00:10:32.428 "data_size": 63488 00:10:32.428 }, 00:10:32.428 { 00:10:32.428 "name": "BaseBdev3", 00:10:32.428 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:32.428 "is_configured": true, 00:10:32.428 "data_offset": 2048, 00:10:32.428 "data_size": 63488 00:10:32.428 }, 00:10:32.428 { 00:10:32.428 "name": "BaseBdev4", 00:10:32.428 "uuid": 
"8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:32.429 "is_configured": true, 00:10:32.429 "data_offset": 2048, 00:10:32.429 "data_size": 63488 00:10:32.429 } 00:10:32.429 ] 00:10:32.429 }' 00:10:32.429 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.429 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.688 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.948 [2024-12-13 04:26:32.704653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.948 "name": "Existed_Raid", 00:10:32.948 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:32.948 "strip_size_kb": 64, 00:10:32.948 "state": "configuring", 00:10:32.948 "raid_level": "concat", 00:10:32.948 "superblock": true, 00:10:32.948 "num_base_bdevs": 4, 00:10:32.948 "num_base_bdevs_discovered": 2, 00:10:32.948 "num_base_bdevs_operational": 4, 00:10:32.948 "base_bdevs_list": [ 00:10:32.948 { 00:10:32.948 "name": null, 00:10:32.948 
"uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:32.948 "is_configured": false, 00:10:32.948 "data_offset": 0, 00:10:32.948 "data_size": 63488 00:10:32.948 }, 00:10:32.948 { 00:10:32.948 "name": null, 00:10:32.948 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:32.948 "is_configured": false, 00:10:32.948 "data_offset": 0, 00:10:32.948 "data_size": 63488 00:10:32.948 }, 00:10:32.948 { 00:10:32.948 "name": "BaseBdev3", 00:10:32.948 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:32.948 "is_configured": true, 00:10:32.948 "data_offset": 2048, 00:10:32.948 "data_size": 63488 00:10:32.948 }, 00:10:32.948 { 00:10:32.948 "name": "BaseBdev4", 00:10:32.948 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:32.948 "is_configured": true, 00:10:32.948 "data_offset": 2048, 00:10:32.948 "data_size": 63488 00:10:32.948 } 00:10:32.948 ] 00:10:32.948 }' 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.948 04:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 [2024-12-13 04:26:33.163677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.208 04:26:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.208 "name": "Existed_Raid", 00:10:33.208 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:33.208 "strip_size_kb": 64, 00:10:33.208 "state": "configuring", 00:10:33.208 "raid_level": "concat", 00:10:33.208 "superblock": true, 00:10:33.208 "num_base_bdevs": 4, 00:10:33.208 "num_base_bdevs_discovered": 3, 00:10:33.208 "num_base_bdevs_operational": 4, 00:10:33.208 "base_bdevs_list": [ 00:10:33.208 { 00:10:33.208 "name": null, 00:10:33.208 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:33.208 "is_configured": false, 00:10:33.208 "data_offset": 0, 00:10:33.208 "data_size": 63488 00:10:33.208 }, 00:10:33.208 { 00:10:33.208 "name": "BaseBdev2", 00:10:33.208 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:33.208 "is_configured": true, 00:10:33.208 "data_offset": 2048, 00:10:33.208 "data_size": 63488 00:10:33.208 }, 00:10:33.208 { 00:10:33.208 "name": "BaseBdev3", 00:10:33.208 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:33.208 "is_configured": true, 00:10:33.208 "data_offset": 2048, 00:10:33.208 "data_size": 63488 00:10:33.208 }, 00:10:33.208 { 00:10:33.208 "name": "BaseBdev4", 00:10:33.208 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:33.208 "is_configured": true, 00:10:33.208 "data_offset": 2048, 00:10:33.208 "data_size": 63488 00:10:33.208 } 00:10:33.208 ] 00:10:33.208 }' 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.208 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.777 04:26:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a34ae3d1-7e74-4b4d-a7dc-031c7e167101 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.777 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.777 [2024-12-13 04:26:33.699469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.777 [2024-12-13 04:26:33.699733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:33.778 [2024-12-13 04:26:33.699780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.778 [2024-12-13 04:26:33.700107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:33.778 NewBaseBdev 00:10:33.778 [2024-12-13 04:26:33.700263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:33.778 [2024-12-13 04:26:33.700277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:33.778 [2024-12-13 04:26:33.700383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.778 04:26:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.778 [ 00:10:33.778 { 00:10:33.778 "name": "NewBaseBdev", 00:10:33.778 "aliases": [ 00:10:33.778 "a34ae3d1-7e74-4b4d-a7dc-031c7e167101" 00:10:33.778 ], 00:10:33.778 "product_name": "Malloc disk", 00:10:33.778 "block_size": 512, 00:10:33.778 "num_blocks": 65536, 00:10:33.778 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:33.778 "assigned_rate_limits": { 00:10:33.778 "rw_ios_per_sec": 0, 00:10:33.778 "rw_mbytes_per_sec": 0, 00:10:33.778 "r_mbytes_per_sec": 0, 00:10:33.778 "w_mbytes_per_sec": 0 00:10:33.778 }, 00:10:33.778 "claimed": true, 00:10:33.778 "claim_type": "exclusive_write", 00:10:33.778 "zoned": false, 00:10:33.778 "supported_io_types": { 00:10:33.778 "read": true, 00:10:33.778 "write": true, 00:10:33.778 "unmap": true, 00:10:33.778 "flush": true, 00:10:33.778 "reset": true, 00:10:33.778 "nvme_admin": false, 00:10:33.778 "nvme_io": false, 00:10:33.778 "nvme_io_md": false, 00:10:33.778 "write_zeroes": true, 00:10:33.778 "zcopy": true, 00:10:33.778 "get_zone_info": false, 00:10:33.778 "zone_management": false, 00:10:33.778 "zone_append": false, 00:10:33.778 "compare": false, 00:10:33.778 "compare_and_write": false, 00:10:33.778 "abort": true, 00:10:33.778 "seek_hole": false, 00:10:33.778 "seek_data": false, 00:10:33.778 "copy": true, 00:10:33.778 "nvme_iov_md": false 00:10:33.778 }, 00:10:33.778 "memory_domains": [ 00:10:33.778 { 00:10:33.778 "dma_device_id": "system", 00:10:33.778 "dma_device_type": 1 00:10:33.778 }, 00:10:33.778 { 00:10:33.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.778 "dma_device_type": 2 00:10:33.778 } 00:10:33.778 ], 00:10:33.778 "driver_specific": {} 00:10:33.778 } 00:10:33.778 ] 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:33.778 04:26:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.778 "name": "Existed_Raid", 00:10:33.778 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:33.778 "strip_size_kb": 64, 00:10:33.778 
"state": "online", 00:10:33.778 "raid_level": "concat", 00:10:33.778 "superblock": true, 00:10:33.778 "num_base_bdevs": 4, 00:10:33.778 "num_base_bdevs_discovered": 4, 00:10:33.778 "num_base_bdevs_operational": 4, 00:10:33.778 "base_bdevs_list": [ 00:10:33.778 { 00:10:33.778 "name": "NewBaseBdev", 00:10:33.778 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:33.778 "is_configured": true, 00:10:33.778 "data_offset": 2048, 00:10:33.778 "data_size": 63488 00:10:33.778 }, 00:10:33.778 { 00:10:33.778 "name": "BaseBdev2", 00:10:33.778 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:33.778 "is_configured": true, 00:10:33.778 "data_offset": 2048, 00:10:33.778 "data_size": 63488 00:10:33.778 }, 00:10:33.778 { 00:10:33.778 "name": "BaseBdev3", 00:10:33.778 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:33.778 "is_configured": true, 00:10:33.778 "data_offset": 2048, 00:10:33.778 "data_size": 63488 00:10:33.778 }, 00:10:33.778 { 00:10:33.778 "name": "BaseBdev4", 00:10:33.778 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:33.778 "is_configured": true, 00:10:33.778 "data_offset": 2048, 00:10:33.778 "data_size": 63488 00:10:33.778 } 00:10:33.778 ] 00:10:33.778 }' 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.778 04:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.348 
04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.348 [2024-12-13 04:26:34.210902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.348 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.348 "name": "Existed_Raid", 00:10:34.348 "aliases": [ 00:10:34.348 "42a274d8-3d2b-4935-92f6-5791f08fb047" 00:10:34.348 ], 00:10:34.348 "product_name": "Raid Volume", 00:10:34.348 "block_size": 512, 00:10:34.348 "num_blocks": 253952, 00:10:34.348 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:34.348 "assigned_rate_limits": { 00:10:34.348 "rw_ios_per_sec": 0, 00:10:34.348 "rw_mbytes_per_sec": 0, 00:10:34.348 "r_mbytes_per_sec": 0, 00:10:34.348 "w_mbytes_per_sec": 0 00:10:34.348 }, 00:10:34.348 "claimed": false, 00:10:34.348 "zoned": false, 00:10:34.348 "supported_io_types": { 00:10:34.348 "read": true, 00:10:34.348 "write": true, 00:10:34.348 "unmap": true, 00:10:34.348 "flush": true, 00:10:34.348 "reset": true, 00:10:34.348 "nvme_admin": false, 00:10:34.348 "nvme_io": false, 00:10:34.348 "nvme_io_md": false, 00:10:34.348 "write_zeroes": true, 00:10:34.348 "zcopy": false, 00:10:34.348 "get_zone_info": false, 00:10:34.348 "zone_management": false, 00:10:34.348 "zone_append": false, 00:10:34.348 "compare": false, 00:10:34.348 "compare_and_write": false, 00:10:34.348 "abort": 
false, 00:10:34.348 "seek_hole": false, 00:10:34.348 "seek_data": false, 00:10:34.348 "copy": false, 00:10:34.348 "nvme_iov_md": false 00:10:34.348 }, 00:10:34.348 "memory_domains": [ 00:10:34.348 { 00:10:34.348 "dma_device_id": "system", 00:10:34.348 "dma_device_type": 1 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.348 "dma_device_type": 2 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "system", 00:10:34.348 "dma_device_type": 1 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.348 "dma_device_type": 2 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "system", 00:10:34.348 "dma_device_type": 1 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.348 "dma_device_type": 2 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "system", 00:10:34.348 "dma_device_type": 1 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.348 "dma_device_type": 2 00:10:34.348 } 00:10:34.348 ], 00:10:34.348 "driver_specific": { 00:10:34.348 "raid": { 00:10:34.348 "uuid": "42a274d8-3d2b-4935-92f6-5791f08fb047", 00:10:34.348 "strip_size_kb": 64, 00:10:34.348 "state": "online", 00:10:34.348 "raid_level": "concat", 00:10:34.348 "superblock": true, 00:10:34.348 "num_base_bdevs": 4, 00:10:34.348 "num_base_bdevs_discovered": 4, 00:10:34.348 "num_base_bdevs_operational": 4, 00:10:34.348 "base_bdevs_list": [ 00:10:34.348 { 00:10:34.348 "name": "NewBaseBdev", 00:10:34.348 "uuid": "a34ae3d1-7e74-4b4d-a7dc-031c7e167101", 00:10:34.349 "is_configured": true, 00:10:34.349 "data_offset": 2048, 00:10:34.349 "data_size": 63488 00:10:34.349 }, 00:10:34.349 { 00:10:34.349 "name": "BaseBdev2", 00:10:34.349 "uuid": "388a6ccf-3165-4a29-9367-d4749f132df2", 00:10:34.349 "is_configured": true, 00:10:34.349 "data_offset": 2048, 00:10:34.349 "data_size": 63488 00:10:34.349 }, 00:10:34.349 { 00:10:34.349 
"name": "BaseBdev3", 00:10:34.349 "uuid": "f2ed500f-535c-4fc3-ad28-86f5c9e2b767", 00:10:34.349 "is_configured": true, 00:10:34.349 "data_offset": 2048, 00:10:34.349 "data_size": 63488 00:10:34.349 }, 00:10:34.349 { 00:10:34.349 "name": "BaseBdev4", 00:10:34.349 "uuid": "8e3949b2-3084-4c36-984d-73cda5eb95b5", 00:10:34.349 "is_configured": true, 00:10:34.349 "data_offset": 2048, 00:10:34.349 "data_size": 63488 00:10:34.349 } 00:10:34.349 ] 00:10:34.349 } 00:10:34.349 } 00:10:34.349 }' 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.349 BaseBdev2 00:10:34.349 BaseBdev3 00:10:34.349 BaseBdev4' 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.349 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.609 04:26:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.609 [2024-12-13 04:26:34.538008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.609 [2024-12-13 04:26:34.538077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.609 [2024-12-13 04:26:34.538176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.609 [2024-12-13 04:26:34.538268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.609 [2024-12-13 04:26:34.538281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84506 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84506 ']' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84506 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84506 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.609 killing process with pid 84506 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84506' 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84506 00:10:34.609 [2024-12-13 04:26:34.583978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.609 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84506 00:10:34.868 [2024-12-13 04:26:34.658887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.126 04:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.126 00:10:35.126 real 0m10.192s 00:10:35.126 user 0m17.178s 00:10:35.126 sys 0m2.263s 00:10:35.126 ************************************ 00:10:35.126 END TEST raid_state_function_test_sb 00:10:35.126 
************************************ 00:10:35.126 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.126 04:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 04:26:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:35.126 04:26:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:35.126 04:26:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.126 04:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 ************************************ 00:10:35.126 START TEST raid_superblock_test 00:10:35.126 ************************************ 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:35.126 04:26:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85165 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85165 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85165 ']' 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.126 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.385 [2024-12-13 04:26:35.147108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:35.385 [2024-12-13 04:26:35.147328] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85165 ] 00:10:35.385 [2024-12-13 04:26:35.281203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.385 [2024-12-13 04:26:35.320942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.385 [2024-12-13 04:26:35.397135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.385 [2024-12-13 04:26:35.397269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:36.320 
04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 malloc1 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.320 04:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 [2024-12-13 04:26:36.005790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:36.320 [2024-12-13 04:26:36.005941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.320 [2024-12-13 04:26:36.005989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:36.320 [2024-12-13 04:26:36.006032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.320 [2024-12-13 04:26:36.008406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.320 [2024-12-13 04:26:36.008501] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:36.320 pt1 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 malloc2 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 [2024-12-13 04:26:36.044164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.320 [2024-12-13 04:26:36.044224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.320 [2024-12-13 04:26:36.044244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.320 [2024-12-13 04:26:36.044254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.320 [2024-12-13 04:26:36.046637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.320 [2024-12-13 04:26:36.046672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.320 
pt2 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 malloc3 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.320 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 [2024-12-13 04:26:36.078686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.321 [2024-12-13 04:26:36.078793] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.321 [2024-12-13 04:26:36.078835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.321 [2024-12-13 04:26:36.078867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.321 [2024-12-13 04:26:36.081313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.321 [2024-12-13 04:26:36.081388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.321 pt3 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.321 malloc4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.321 [2024-12-13 04:26:36.125688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:36.321 [2024-12-13 04:26:36.125800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.321 [2024-12-13 04:26:36.125837] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.321 [2024-12-13 04:26:36.125874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.321 [2024-12-13 04:26:36.128281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.321 [2024-12-13 04:26:36.128356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:36.321 pt4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.321 [2024-12-13 04:26:36.137689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.321 [2024-12-13 
04:26:36.139878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.321 [2024-12-13 04:26:36.139947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.321 [2024-12-13 04:26:36.140014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:36.321 [2024-12-13 04:26:36.140171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:36.321 [2024-12-13 04:26:36.140184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.321 [2024-12-13 04:26:36.140462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:36.321 [2024-12-13 04:26:36.140605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:36.321 [2024-12-13 04:26:36.140622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:36.321 [2024-12-13 04:26:36.140756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.321 "name": "raid_bdev1", 00:10:36.321 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:36.321 "strip_size_kb": 64, 00:10:36.321 "state": "online", 00:10:36.321 "raid_level": "concat", 00:10:36.321 "superblock": true, 00:10:36.321 "num_base_bdevs": 4, 00:10:36.321 "num_base_bdevs_discovered": 4, 00:10:36.321 "num_base_bdevs_operational": 4, 00:10:36.321 "base_bdevs_list": [ 00:10:36.321 { 00:10:36.321 "name": "pt1", 00:10:36.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.321 "is_configured": true, 00:10:36.321 "data_offset": 2048, 00:10:36.321 "data_size": 63488 00:10:36.321 }, 00:10:36.321 { 00:10:36.321 "name": "pt2", 00:10:36.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.321 "is_configured": true, 00:10:36.321 "data_offset": 2048, 00:10:36.321 "data_size": 63488 00:10:36.321 }, 00:10:36.321 { 00:10:36.321 "name": "pt3", 00:10:36.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.321 "is_configured": true, 00:10:36.321 "data_offset": 2048, 00:10:36.321 
"data_size": 63488 00:10:36.321 }, 00:10:36.321 { 00:10:36.321 "name": "pt4", 00:10:36.321 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.321 "is_configured": true, 00:10:36.321 "data_offset": 2048, 00:10:36.321 "data_size": 63488 00:10:36.321 } 00:10:36.321 ] 00:10:36.321 }' 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.321 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.580 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.839 [2024-12-13 04:26:36.597236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.839 "name": "raid_bdev1", 00:10:36.839 "aliases": [ 00:10:36.839 "aa4df107-18d1-4222-bd70-9ee439fa3766" 
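The `jq -r '.[] | select(.name == "raid_bdev1")'` filter used by `verify_raid_bdev_state` above can be mirrored in plain Python against a trimmed-down copy of the `bdev_raid_get_bdevs all` output (field values taken from the dump above; this is an illustrative sketch, not part of the test suite):

```python
import json

# Trimmed copy of the `bdev_raid_get_bdevs all` dump above (values from the log).
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
""")

# Python equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info = next(b for b in bdevs if b["name"] == "raid_bdev1")

assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["num_base_bdevs_discovered"] == 4
```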
00:10:36.839 ], 00:10:36.839 "product_name": "Raid Volume", 00:10:36.839 "block_size": 512, 00:10:36.839 "num_blocks": 253952, 00:10:36.839 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:36.839 "assigned_rate_limits": { 00:10:36.839 "rw_ios_per_sec": 0, 00:10:36.839 "rw_mbytes_per_sec": 0, 00:10:36.839 "r_mbytes_per_sec": 0, 00:10:36.839 "w_mbytes_per_sec": 0 00:10:36.839 }, 00:10:36.839 "claimed": false, 00:10:36.839 "zoned": false, 00:10:36.839 "supported_io_types": { 00:10:36.839 "read": true, 00:10:36.839 "write": true, 00:10:36.839 "unmap": true, 00:10:36.839 "flush": true, 00:10:36.839 "reset": true, 00:10:36.839 "nvme_admin": false, 00:10:36.839 "nvme_io": false, 00:10:36.839 "nvme_io_md": false, 00:10:36.839 "write_zeroes": true, 00:10:36.839 "zcopy": false, 00:10:36.839 "get_zone_info": false, 00:10:36.839 "zone_management": false, 00:10:36.839 "zone_append": false, 00:10:36.839 "compare": false, 00:10:36.839 "compare_and_write": false, 00:10:36.839 "abort": false, 00:10:36.839 "seek_hole": false, 00:10:36.839 "seek_data": false, 00:10:36.839 "copy": false, 00:10:36.839 "nvme_iov_md": false 00:10:36.839 }, 00:10:36.839 "memory_domains": [ 00:10:36.839 { 00:10:36.839 "dma_device_id": "system", 00:10:36.839 "dma_device_type": 1 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.839 "dma_device_type": 2 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": "system", 00:10:36.839 "dma_device_type": 1 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.839 "dma_device_type": 2 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": "system", 00:10:36.839 "dma_device_type": 1 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.839 "dma_device_type": 2 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": "system", 00:10:36.839 "dma_device_type": 1 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:36.839 "dma_device_type": 2 00:10:36.839 } 00:10:36.839 ], 00:10:36.839 "driver_specific": { 00:10:36.839 "raid": { 00:10:36.839 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:36.839 "strip_size_kb": 64, 00:10:36.839 "state": "online", 00:10:36.839 "raid_level": "concat", 00:10:36.839 "superblock": true, 00:10:36.839 "num_base_bdevs": 4, 00:10:36.839 "num_base_bdevs_discovered": 4, 00:10:36.839 "num_base_bdevs_operational": 4, 00:10:36.839 "base_bdevs_list": [ 00:10:36.839 { 00:10:36.839 "name": "pt1", 00:10:36.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.839 "is_configured": true, 00:10:36.839 "data_offset": 2048, 00:10:36.839 "data_size": 63488 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "name": "pt2", 00:10:36.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.839 "is_configured": true, 00:10:36.839 "data_offset": 2048, 00:10:36.839 "data_size": 63488 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "name": "pt3", 00:10:36.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.839 "is_configured": true, 00:10:36.839 "data_offset": 2048, 00:10:36.839 "data_size": 63488 00:10:36.839 }, 00:10:36.839 { 00:10:36.839 "name": "pt4", 00:10:36.839 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.839 "is_configured": true, 00:10:36.839 "data_offset": 2048, 00:10:36.839 "data_size": 63488 00:10:36.839 } 00:10:36.839 ] 00:10:36.839 } 00:10:36.839 } 00:10:36.839 }' 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.839 pt2 00:10:36.839 pt3 00:10:36.839 pt4' 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- 
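The base-bdev-name extraction shown above (`jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`) reduces to a simple list comprehension; a minimal sketch against the trimmed list from the dump above:

```python
# base_bdevs_list as reported by `bdev_get_bdevs -b raid_bdev1` above (trimmed).
base_bdevs_list = [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001", "is_configured": True},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002", "is_configured": True},
    {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003", "is_configured": True},
    {"name": "pt4", "uuid": "00000000-0000-0000-0000-000000000004", "is_configured": True},
]

# Python equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"] for b in base_bdevs_list if b["is_configured"]]

assert base_bdev_names == ["pt1", "pt2", "pt3", "pt4"]
```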
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.839 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.840 04:26:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.840 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
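The `cmp_raid_bdev`/`cmp_base_bdev` strings above come from `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`; since jq's `join` renders `null` as the empty string, a bdev that only reports `block_size` yields `'512   '` (three trailing spaces), which is what the `[[ ... == \5\1\2\ \ \ ]]` comparisons match. A sketch of that behaviour, assuming the metadata fields are unset as in the log:

```python
def format_cmp_fields(bdev):
    """Mirror jq '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.

    jq's join() renders null as the empty string, so missing metadata fields
    leave trailing spaces in the comparison string.
    """
    fields = (bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type"))
    return " ".join("" if f is None else str(f) for f in fields)

# Assumed shape: raid volume and pt base bdev both report block_size 512 and
# leave md_size/md_interleave/dif_type unset (consistent with the log above).
raid_volume = {"block_size": 512}
pt_base_bdev = {"block_size": 512}

assert format_cmp_fields(raid_volume) == format_cmp_fields(pt_base_bdev) == "512   "
```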
xtrace_disable 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 [2024-12-13 04:26:36.892872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa4df107-18d1-4222-bd70-9ee439fa3766 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aa4df107-18d1-4222-bd70-9ee439fa3766 ']' 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 [2024-12-13 04:26:36.940577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.100 [2024-12-13 04:26:36.940621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.100 [2024-12-13 04:26:36.940716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.100 [2024-12-13 04:26:36.940801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.100 [2024-12-13 04:26:36.940814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.100 04:26:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.100 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.100 [2024-12-13 04:26:37.108410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:37.100 [2024-12-13 04:26:37.110689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:37.100 [2024-12-13 04:26:37.110737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:37.100 [2024-12-13 04:26:37.110767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:37.100 [2024-12-13 04:26:37.110820] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:37.100 [2024-12-13 04:26:37.110865] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:37.100 [2024-12-13 04:26:37.110883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:37.100 [2024-12-13 04:26:37.110900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:37.100 [2024-12-13 04:26:37.110913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.101 [2024-12-13 04:26:37.110923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:10:37.360 request: 00:10:37.361 { 00:10:37.361 "name": "raid_bdev1", 00:10:37.361 "raid_level": "concat", 00:10:37.361 "base_bdevs": [ 00:10:37.361 "malloc1", 00:10:37.361 "malloc2", 00:10:37.361 "malloc3", 00:10:37.361 "malloc4" 00:10:37.361 ], 00:10:37.361 "strip_size_kb": 64, 00:10:37.361 "superblock": false, 00:10:37.361 "method": "bdev_raid_create", 00:10:37.361 "req_id": 1 00:10:37.361 } 00:10:37.361 Got JSON-RPC error response 00:10:37.361 response: 00:10:37.361 { 00:10:37.361 "code": -17, 00:10:37.361 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:37.361 } 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
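The negative-path check above expects `bdev_raid_create` to fail with `-EEXIST` because the superblocks on `malloc1`..`malloc4` already belong to `raid_bdev1`; parsing the captured JSON-RPC error response can be sketched as (response body copied from the log):

```python
import json

# JSON-RPC error response from the log: re-creating raid_bdev1 fails because
# the base bdevs' superblocks already belong to an existing raid bdev.
error_response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

assert error_response["code"] == -17  # -EEXIST
assert error_response["message"].endswith("File exists")
```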
-u 00000000-0000-0000-0000-000000000001 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.361 [2024-12-13 04:26:37.180274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.361 [2024-12-13 04:26:37.180350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.361 [2024-12-13 04:26:37.180376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:37.361 [2024-12-13 04:26:37.180385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.361 [2024-12-13 04:26:37.182983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.361 [2024-12-13 04:26:37.183017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.361 [2024-12-13 04:26:37.183102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:37.361 [2024-12-13 04:26:37.183150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.361 pt1 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.361 "name": "raid_bdev1", 00:10:37.361 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:37.361 "strip_size_kb": 64, 00:10:37.361 "state": "configuring", 00:10:37.361 "raid_level": "concat", 00:10:37.361 "superblock": true, 00:10:37.361 "num_base_bdevs": 4, 00:10:37.361 "num_base_bdevs_discovered": 1, 00:10:37.361 "num_base_bdevs_operational": 4, 00:10:37.361 "base_bdevs_list": [ 00:10:37.361 { 00:10:37.361 "name": "pt1", 00:10:37.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.361 "is_configured": true, 00:10:37.361 "data_offset": 2048, 00:10:37.361 "data_size": 63488 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "name": null, 00:10:37.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.361 "is_configured": false, 00:10:37.361 "data_offset": 2048, 00:10:37.361 "data_size": 63488 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "name": null, 00:10:37.361 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.361 "is_configured": false, 00:10:37.361 "data_offset": 2048, 00:10:37.361 "data_size": 63488 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "name": null, 00:10:37.361 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.361 "is_configured": false, 00:10:37.361 "data_offset": 2048, 00:10:37.361 "data_size": 63488 00:10:37.361 } 00:10:37.361 ] 00:10:37.361 }' 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.361 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.621 [2024-12-13 04:26:37.607584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.621 [2024-12-13 04:26:37.607746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.621 [2024-12-13 04:26:37.607792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:37.621 [2024-12-13 04:26:37.607822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.621 [2024-12-13 04:26:37.608345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.621 [2024-12-13 04:26:37.608402] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.621 [2024-12-13 04:26:37.608544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.621 [2024-12-13 04:26:37.608605] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.621 pt2 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.621 [2024-12-13 04:26:37.619575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.621 04:26:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.621 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.880 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.880 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.880 "name": "raid_bdev1", 00:10:37.880 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:37.880 "strip_size_kb": 64, 00:10:37.880 "state": "configuring", 00:10:37.880 "raid_level": "concat", 00:10:37.880 "superblock": true, 00:10:37.880 "num_base_bdevs": 4, 00:10:37.880 "num_base_bdevs_discovered": 1, 00:10:37.880 "num_base_bdevs_operational": 4, 00:10:37.880 "base_bdevs_list": [ 00:10:37.880 { 00:10:37.880 "name": "pt1", 00:10:37.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.881 "is_configured": true, 00:10:37.881 "data_offset": 2048, 00:10:37.881 "data_size": 63488 00:10:37.881 }, 00:10:37.881 { 00:10:37.881 "name": null, 00:10:37.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.881 "is_configured": false, 00:10:37.881 "data_offset": 0, 00:10:37.881 "data_size": 63488 00:10:37.881 }, 00:10:37.881 { 00:10:37.881 "name": null, 00:10:37.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.881 "is_configured": false, 00:10:37.881 "data_offset": 2048, 00:10:37.881 "data_size": 63488 00:10:37.881 }, 00:10:37.881 { 00:10:37.881 "name": null, 00:10:37.881 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.881 "is_configured": false, 00:10:37.881 "data_offset": 2048, 00:10:37.881 "data_size": 63488 00:10:37.881 } 00:10:37.881 ] 00:10:37.881 }' 00:10:37.881 04:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.881 04:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.140 [2024-12-13 04:26:38.022865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.140 [2024-12-13 04:26:38.022947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.140 [2024-12-13 04:26:38.022972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:38.140 [2024-12-13 04:26:38.022984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.140 [2024-12-13 04:26:38.023474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.140 [2024-12-13 04:26:38.023499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.140 [2024-12-13 04:26:38.023587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.140 [2024-12-13 04:26:38.023624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.140 pt2 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.140 [2024-12-13 04:26:38.034766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.140 [2024-12-13 04:26:38.034818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.140 [2024-12-13 04:26:38.034850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.140 [2024-12-13 04:26:38.034860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.140 [2024-12-13 04:26:38.035230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.140 [2024-12-13 04:26:38.035249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.140 [2024-12-13 04:26:38.035306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:38.140 [2024-12-13 04:26:38.035329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.140 pt3 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.140 [2024-12-13 04:26:38.046746] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:38.140 [2024-12-13 04:26:38.046815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.140 [2024-12-13 04:26:38.046828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:38.140 [2024-12-13 04:26:38.046839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.140 [2024-12-13 04:26:38.047148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.140 [2024-12-13 04:26:38.047166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:38.140 [2024-12-13 04:26:38.047219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:38.140 [2024-12-13 04:26:38.047239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:38.140 [2024-12-13 04:26:38.047355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:38.140 [2024-12-13 04:26:38.047370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.140 [2024-12-13 04:26:38.047635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:38.140 [2024-12-13 04:26:38.047760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:38.140 [2024-12-13 04:26:38.047768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:38.140 [2024-12-13 04:26:38.047873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.140 pt4 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.140 
04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.140 "name": "raid_bdev1", 00:10:38.140 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:38.140 "strip_size_kb": 64, 00:10:38.140 "state": "online", 00:10:38.140 "raid_level": "concat", 00:10:38.140 "superblock": true, 00:10:38.140 
"num_base_bdevs": 4, 00:10:38.140 "num_base_bdevs_discovered": 4, 00:10:38.140 "num_base_bdevs_operational": 4, 00:10:38.140 "base_bdevs_list": [ 00:10:38.140 { 00:10:38.140 "name": "pt1", 00:10:38.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.140 "is_configured": true, 00:10:38.140 "data_offset": 2048, 00:10:38.140 "data_size": 63488 00:10:38.140 }, 00:10:38.140 { 00:10:38.140 "name": "pt2", 00:10:38.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.140 "is_configured": true, 00:10:38.140 "data_offset": 2048, 00:10:38.140 "data_size": 63488 00:10:38.140 }, 00:10:38.140 { 00:10:38.140 "name": "pt3", 00:10:38.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.140 "is_configured": true, 00:10:38.140 "data_offset": 2048, 00:10:38.140 "data_size": 63488 00:10:38.140 }, 00:10:38.140 { 00:10:38.140 "name": "pt4", 00:10:38.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.140 "is_configured": true, 00:10:38.140 "data_offset": 2048, 00:10:38.140 "data_size": 63488 00:10:38.140 } 00:10:38.140 ] 00:10:38.140 }' 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.140 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.709 [2024-12-13 04:26:38.502341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.709 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.709 "name": "raid_bdev1", 00:10:38.709 "aliases": [ 00:10:38.709 "aa4df107-18d1-4222-bd70-9ee439fa3766" 00:10:38.709 ], 00:10:38.709 "product_name": "Raid Volume", 00:10:38.709 "block_size": 512, 00:10:38.709 "num_blocks": 253952, 00:10:38.709 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:38.709 "assigned_rate_limits": { 00:10:38.709 "rw_ios_per_sec": 0, 00:10:38.709 "rw_mbytes_per_sec": 0, 00:10:38.709 "r_mbytes_per_sec": 0, 00:10:38.709 "w_mbytes_per_sec": 0 00:10:38.709 }, 00:10:38.709 "claimed": false, 00:10:38.709 "zoned": false, 00:10:38.709 "supported_io_types": { 00:10:38.709 "read": true, 00:10:38.709 "write": true, 00:10:38.709 "unmap": true, 00:10:38.709 "flush": true, 00:10:38.709 "reset": true, 00:10:38.709 "nvme_admin": false, 00:10:38.709 "nvme_io": false, 00:10:38.709 "nvme_io_md": false, 00:10:38.709 "write_zeroes": true, 00:10:38.709 "zcopy": false, 00:10:38.709 "get_zone_info": false, 00:10:38.709 "zone_management": false, 00:10:38.709 "zone_append": false, 00:10:38.709 "compare": false, 00:10:38.709 "compare_and_write": false, 00:10:38.709 "abort": false, 00:10:38.709 "seek_hole": false, 00:10:38.709 "seek_data": false, 00:10:38.709 "copy": false, 00:10:38.709 "nvme_iov_md": false 00:10:38.709 }, 00:10:38.709 "memory_domains": [ 00:10:38.709 { 00:10:38.709 "dma_device_id": "system", 
00:10:38.709 "dma_device_type": 1 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.709 "dma_device_type": 2 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "system", 00:10:38.709 "dma_device_type": 1 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.709 "dma_device_type": 2 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "system", 00:10:38.709 "dma_device_type": 1 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.709 "dma_device_type": 2 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "system", 00:10:38.709 "dma_device_type": 1 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.709 "dma_device_type": 2 00:10:38.709 } 00:10:38.709 ], 00:10:38.709 "driver_specific": { 00:10:38.709 "raid": { 00:10:38.709 "uuid": "aa4df107-18d1-4222-bd70-9ee439fa3766", 00:10:38.709 "strip_size_kb": 64, 00:10:38.709 "state": "online", 00:10:38.709 "raid_level": "concat", 00:10:38.709 "superblock": true, 00:10:38.709 "num_base_bdevs": 4, 00:10:38.709 "num_base_bdevs_discovered": 4, 00:10:38.709 "num_base_bdevs_operational": 4, 00:10:38.709 "base_bdevs_list": [ 00:10:38.709 { 00:10:38.709 "name": "pt1", 00:10:38.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.709 "is_configured": true, 00:10:38.709 "data_offset": 2048, 00:10:38.709 "data_size": 63488 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "name": "pt2", 00:10:38.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.709 "is_configured": true, 00:10:38.709 "data_offset": 2048, 00:10:38.709 "data_size": 63488 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "name": "pt3", 00:10:38.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.709 "is_configured": true, 00:10:38.709 "data_offset": 2048, 00:10:38.709 "data_size": 63488 00:10:38.709 }, 00:10:38.709 { 00:10:38.709 "name": "pt4", 00:10:38.709 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.709 "is_configured": true, 00:10:38.709 "data_offset": 2048, 00:10:38.709 "data_size": 63488 00:10:38.709 } 00:10:38.710 ] 00:10:38.710 } 00:10:38.710 } 00:10:38.710 }' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.710 pt2 00:10:38.710 pt3 00:10:38.710 pt4' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.710 
04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.710 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.969 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.970 04:26:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.970 [2024-12-13 04:26:38.857692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aa4df107-18d1-4222-bd70-9ee439fa3766 '!=' aa4df107-18d1-4222-bd70-9ee439fa3766 ']' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85165 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85165 ']' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85165 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:38.970 04:26:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85165 00:10:38.970 killing process with pid 85165 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85165' 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85165 00:10:38.970 [2024-12-13 04:26:38.938524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.970 [2024-12-13 04:26:38.938613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.970 [2024-12-13 04:26:38.938689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.970 [2024-12-13 04:26:38.938702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:38.970 04:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85165 00:10:39.229 [2024-12-13 04:26:39.017426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.488 04:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:39.489 00:10:39.489 real 0m4.288s 00:10:39.489 user 0m6.559s 00:10:39.489 sys 0m0.994s 00:10:39.489 04:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.489 04:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.489 ************************************ 00:10:39.489 END TEST raid_superblock_test 00:10:39.489 ************************************ 00:10:39.489 
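The verification loop in the test above repeatedly extracts the configured base bdev names from the raid bdev info with a jq `select` filter (bdev_raid.sh@188). As a standalone sketch of that filter, assuming `jq` is installed and using a trimmed, hypothetical copy of the `raid_bdev1` JSON dumped earlier in this log:

```shell
# Sketch only, not part of the test log: a trimmed copy of the raid_bdev_info
# JSON seen above, with one unconfigured slot to show the filter dropping it.
raid_bdev_info='{
  "driver_specific": { "raid": { "base_bdevs_list": [
    { "name": "pt1", "is_configured": true },
    { "name": "pt2", "is_configured": true },
    { "name": null,  "is_configured": false }
  ] } } }'

# Same filter shape as bdev_raid.sh@188: keep only configured entries and
# print their names, one per line.
base_bdev_names=$(echo "$raid_bdev_info" \
  | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$base_bdev_names"
```

The unconfigured slot (here a `null` name with `is_configured: false`, as in the pre-configuration dump at the top of this section) is filtered out, leaving only `pt1` and `pt2`.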
04:26:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:39.489 04:26:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:39.489 04:26:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.489 04:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.489 ************************************ 00:10:39.489 START TEST raid_read_error_test 00:10:39.489 ************************************ 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.flrB988WbH 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85413 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:39.489 04:26:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85413 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85413 ']' 00:10:39.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.489 04:26:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.749 [2024-12-13 04:26:39.527603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:39.749 [2024-12-13 04:26:39.527718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85413 ] 00:10:39.749 [2024-12-13 04:26:39.681714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.749 [2024-12-13 04:26:39.720752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.008 [2024-12-13 04:26:39.796343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.008 [2024-12-13 04:26:39.796383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 BaseBdev1_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 true 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 [2024-12-13 04:26:40.385046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.577 [2024-12-13 04:26:40.385186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.577 [2024-12-13 04:26:40.385219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:40.577 [2024-12-13 04:26:40.385236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.577 [2024-12-13 04:26:40.387611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.577 [2024-12-13 04:26:40.387651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.577 BaseBdev1 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 BaseBdev2_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 true 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 [2024-12-13 04:26:40.431541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.577 [2024-12-13 04:26:40.431594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.577 [2024-12-13 04:26:40.431618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:40.577 [2024-12-13 04:26:40.431636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.577 [2024-12-13 04:26:40.433978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.577 [2024-12-13 04:26:40.434017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.577 BaseBdev2 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 BaseBdev3_malloc 00:10:40.577 04:26:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 true 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 [2024-12-13 04:26:40.478117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:40.577 [2024-12-13 04:26:40.478168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.577 [2024-12-13 04:26:40.478194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:40.577 [2024-12-13 04:26:40.478203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.577 [2024-12-13 04:26:40.480590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.577 [2024-12-13 04:26:40.480628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:40.577 BaseBdev3 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.577 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.577 BaseBdev4_malloc 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.578 true 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.578 [2024-12-13 04:26:40.523896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:40.578 [2024-12-13 04:26:40.523947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.578 [2024-12-13 04:26:40.523974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:40.578 [2024-12-13 04:26:40.523983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.578 [2024-12-13 04:26:40.526534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.578 [2024-12-13 04:26:40.526569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:40.578 BaseBdev4 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.578 [2024-12-13 04:26:40.531927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.578 [2024-12-13 04:26:40.534021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.578 [2024-12-13 04:26:40.534119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.578 [2024-12-13 04:26:40.534171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.578 [2024-12-13 04:26:40.534380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:40.578 [2024-12-13 04:26:40.534392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:40.578 [2024-12-13 04:26:40.534676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:40.578 [2024-12-13 04:26:40.534841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:40.578 [2024-12-13 04:26:40.534860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:40.578 [2024-12-13 04:26:40.534979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:40.578 04:26:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.578 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.838 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.838 "name": "raid_bdev1", 00:10:40.838 "uuid": "c059082b-2c19-4201-bda6-52f7079a9905", 00:10:40.838 "strip_size_kb": 64, 00:10:40.838 "state": "online", 00:10:40.838 "raid_level": "concat", 00:10:40.838 "superblock": true, 00:10:40.838 "num_base_bdevs": 4, 00:10:40.838 "num_base_bdevs_discovered": 4, 00:10:40.838 "num_base_bdevs_operational": 4, 00:10:40.838 "base_bdevs_list": [ 
00:10:40.838 { 00:10:40.838 "name": "BaseBdev1", 00:10:40.838 "uuid": "6976d099-050b-5cc4-8a57-7f21d542cfcd", 00:10:40.838 "is_configured": true, 00:10:40.838 "data_offset": 2048, 00:10:40.838 "data_size": 63488 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "name": "BaseBdev2", 00:10:40.838 "uuid": "ad260c40-eb34-56e2-8a4d-dd4a9912fd40", 00:10:40.838 "is_configured": true, 00:10:40.838 "data_offset": 2048, 00:10:40.838 "data_size": 63488 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "name": "BaseBdev3", 00:10:40.838 "uuid": "1e3b7a65-6724-514c-a7f5-752213d49178", 00:10:40.838 "is_configured": true, 00:10:40.838 "data_offset": 2048, 00:10:40.838 "data_size": 63488 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "name": "BaseBdev4", 00:10:40.838 "uuid": "ad880a03-18d3-5076-a144-174929bb7123", 00:10:40.838 "is_configured": true, 00:10:40.838 "data_offset": 2048, 00:10:40.838 "data_size": 63488 00:10:40.838 } 00:10:40.838 ] 00:10:40.838 }' 00:10:40.838 04:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.838 04:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.097 04:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:41.097 04:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:41.097 [2024-12-13 04:26:41.099475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.035 04:26:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.035 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.294 04:26:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.294 "name": "raid_bdev1", 00:10:42.294 "uuid": "c059082b-2c19-4201-bda6-52f7079a9905", 00:10:42.294 "strip_size_kb": 64, 00:10:42.294 "state": "online", 00:10:42.294 "raid_level": "concat", 00:10:42.294 "superblock": true, 00:10:42.294 "num_base_bdevs": 4, 00:10:42.294 "num_base_bdevs_discovered": 4, 00:10:42.294 "num_base_bdevs_operational": 4, 00:10:42.294 "base_bdevs_list": [ 00:10:42.294 { 00:10:42.294 "name": "BaseBdev1", 00:10:42.294 "uuid": "6976d099-050b-5cc4-8a57-7f21d542cfcd", 00:10:42.294 "is_configured": true, 00:10:42.294 "data_offset": 2048, 00:10:42.294 "data_size": 63488 00:10:42.294 }, 00:10:42.294 { 00:10:42.294 "name": "BaseBdev2", 00:10:42.294 "uuid": "ad260c40-eb34-56e2-8a4d-dd4a9912fd40", 00:10:42.294 "is_configured": true, 00:10:42.294 "data_offset": 2048, 00:10:42.294 "data_size": 63488 00:10:42.294 }, 00:10:42.294 { 00:10:42.294 "name": "BaseBdev3", 00:10:42.294 "uuid": "1e3b7a65-6724-514c-a7f5-752213d49178", 00:10:42.294 "is_configured": true, 00:10:42.294 "data_offset": 2048, 00:10:42.294 "data_size": 63488 00:10:42.294 }, 00:10:42.294 { 00:10:42.294 "name": "BaseBdev4", 00:10:42.294 "uuid": "ad880a03-18d3-5076-a144-174929bb7123", 00:10:42.294 "is_configured": true, 00:10:42.294 "data_offset": 2048, 00:10:42.294 "data_size": 63488 00:10:42.294 } 00:10:42.294 ] 00:10:42.294 }' 00:10:42.294 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.294 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.554 [2024-12-13 04:26:42.444643] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.554 [2024-12-13 04:26:42.444769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.554 [2024-12-13 04:26:42.447347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.554 [2024-12-13 04:26:42.447402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.554 [2024-12-13 04:26:42.447461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.554 [2024-12-13 04:26:42.447479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:42.554 { 00:10:42.554 "results": [ 00:10:42.554 { 00:10:42.554 "job": "raid_bdev1", 00:10:42.554 "core_mask": "0x1", 00:10:42.554 "workload": "randrw", 00:10:42.554 "percentage": 50, 00:10:42.554 "status": "finished", 00:10:42.554 "queue_depth": 1, 00:10:42.554 "io_size": 131072, 00:10:42.554 "runtime": 1.345868, 00:10:42.554 "iops": 14491.762936632716, 00:10:42.554 "mibps": 1811.4703670790896, 00:10:42.554 "io_failed": 1, 00:10:42.554 "io_timeout": 0, 00:10:42.554 "avg_latency_us": 96.76564383334694, 00:10:42.554 "min_latency_us": 25.152838427947597, 00:10:42.554 "max_latency_us": 1409.4532751091704 00:10:42.554 } 00:10:42.554 ], 00:10:42.554 "core_count": 1 00:10:42.554 } 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85413 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85413 ']' 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85413 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85413 00:10:42.554 killing process with pid 85413 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85413' 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85413 00:10:42.554 [2024-12-13 04:26:42.479969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.554 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85413 00:10:42.554 [2024-12-13 04:26:42.546063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.123 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.flrB988WbH 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:43.124 00:10:43.124 real 0m3.458s 00:10:43.124 user 0m4.207s 00:10:43.124 sys 0m0.630s 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:43.124 04:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.124 ************************************ 00:10:43.124 END TEST raid_read_error_test 00:10:43.124 ************************************ 00:10:43.124 04:26:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:43.124 04:26:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.124 04:26:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.124 04:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.124 ************************************ 00:10:43.124 START TEST raid_write_error_test 00:10:43.124 ************************************ 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3GJFKcwcqO 00:10:43.124 04:26:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85542 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85542 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85542 ']' 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.124 04:26:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.124 [2024-12-13 04:26:43.057850] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:43.124 [2024-12-13 04:26:43.058480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85542 ] 00:10:43.383 [2024-12-13 04:26:43.214627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.383 [2024-12-13 04:26:43.252795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.383 [2024-12-13 04:26:43.327971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.383 [2024-12-13 04:26:43.328094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.952 BaseBdev1_malloc 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.952 true 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.952 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.952 [2024-12-13 04:26:43.916384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.952 [2024-12-13 04:26:43.916471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.952 [2024-12-13 04:26:43.916513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:43.952 [2024-12-13 04:26:43.916522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.952 [2024-12-13 04:26:43.918918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.953 [2024-12-13 04:26:43.918951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.953 BaseBdev1 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.953 BaseBdev2_malloc 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.953 04:26:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.953 true 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.953 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.953 [2024-12-13 04:26:43.962814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.953 [2024-12-13 04:26:43.962864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.953 [2024-12-13 04:26:43.962885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:43.953 [2024-12-13 04:26:43.962902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.953 [2024-12-13 04:26:43.965275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.953 [2024-12-13 04:26:43.965382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:44.213 BaseBdev2 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:44.213 BaseBdev3_malloc 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 true 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 [2024-12-13 04:26:44.009293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:44.213 [2024-12-13 04:26:44.009346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.213 [2024-12-13 04:26:44.009369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:44.213 [2024-12-13 04:26:44.009378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.213 [2024-12-13 04:26:44.011695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.213 [2024-12-13 04:26:44.011727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:44.213 BaseBdev3 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 BaseBdev4_malloc 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 true 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 [2024-12-13 04:26:44.069644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:44.213 [2024-12-13 04:26:44.069698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.213 [2024-12-13 04:26:44.069725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.213 [2024-12-13 04:26:44.069734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.213 [2024-12-13 04:26:44.072097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.213 [2024-12-13 04:26:44.072134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:44.213 BaseBdev4 
00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 [2024-12-13 04:26:44.081650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.213 [2024-12-13 04:26:44.083695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.213 [2024-12-13 04:26:44.083775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.213 [2024-12-13 04:26:44.083825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.213 [2024-12-13 04:26:44.084033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:44.213 [2024-12-13 04:26:44.084045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.213 [2024-12-13 04:26:44.084290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:44.213 [2024-12-13 04:26:44.084459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:44.213 [2024-12-13 04:26:44.084474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:44.213 [2024-12-13 04:26:44.084601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:44.213 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.214 "name": "raid_bdev1", 00:10:44.214 "uuid": "7b7d8b4a-60d3-4355-b15a-837fae06a334", 00:10:44.214 "strip_size_kb": 64, 00:10:44.214 "state": "online", 00:10:44.214 "raid_level": "concat", 00:10:44.214 "superblock": true, 00:10:44.214 "num_base_bdevs": 4, 00:10:44.214 "num_base_bdevs_discovered": 4, 00:10:44.214 
"num_base_bdevs_operational": 4, 00:10:44.214 "base_bdevs_list": [ 00:10:44.214 { 00:10:44.214 "name": "BaseBdev1", 00:10:44.214 "uuid": "16682133-b7e8-51c2-92c2-ec3a174fae18", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "name": "BaseBdev2", 00:10:44.214 "uuid": "05145ffb-9d22-5c06-8892-f1a530c45886", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "name": "BaseBdev3", 00:10:44.214 "uuid": "7a26c093-3123-560a-939d-17fd4653e748", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "name": "BaseBdev4", 00:10:44.214 "uuid": "d5bfff62-ad5d-510b-b823-3eff9f584998", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 } 00:10:44.214 ] 00:10:44.214 }' 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.214 04:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:44.782 04:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:44.782 [2024-12-13 04:26:44.629256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.718 04:26:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.718 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.718 "name": "raid_bdev1", 00:10:45.719 "uuid": "7b7d8b4a-60d3-4355-b15a-837fae06a334", 00:10:45.719 "strip_size_kb": 64, 00:10:45.719 "state": "online", 00:10:45.719 "raid_level": "concat", 00:10:45.719 "superblock": true, 00:10:45.719 "num_base_bdevs": 4, 00:10:45.719 "num_base_bdevs_discovered": 4, 00:10:45.719 "num_base_bdevs_operational": 4, 00:10:45.719 "base_bdevs_list": [ 00:10:45.719 { 00:10:45.719 "name": "BaseBdev1", 00:10:45.719 "uuid": "16682133-b7e8-51c2-92c2-ec3a174fae18", 00:10:45.719 "is_configured": true, 00:10:45.719 "data_offset": 2048, 00:10:45.719 "data_size": 63488 00:10:45.719 }, 00:10:45.719 { 00:10:45.719 "name": "BaseBdev2", 00:10:45.719 "uuid": "05145ffb-9d22-5c06-8892-f1a530c45886", 00:10:45.719 "is_configured": true, 00:10:45.719 "data_offset": 2048, 00:10:45.719 "data_size": 63488 00:10:45.719 }, 00:10:45.719 { 00:10:45.719 "name": "BaseBdev3", 00:10:45.719 "uuid": "7a26c093-3123-560a-939d-17fd4653e748", 00:10:45.719 "is_configured": true, 00:10:45.719 "data_offset": 2048, 00:10:45.719 "data_size": 63488 00:10:45.719 }, 00:10:45.719 { 00:10:45.719 "name": "BaseBdev4", 00:10:45.719 "uuid": "d5bfff62-ad5d-510b-b823-3eff9f584998", 00:10:45.719 "is_configured": true, 00:10:45.719 "data_offset": 2048, 00:10:45.719 "data_size": 63488 00:10:45.719 } 00:10:45.719 ] 00:10:45.719 }' 00:10:45.719 04:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.719 04:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.322 [2024-12-13 04:26:46.042187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.322 [2024-12-13 04:26:46.042314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.322 [2024-12-13 04:26:46.044964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.322 [2024-12-13 04:26:46.045062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.322 [2024-12-13 04:26:46.045133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.322 [2024-12-13 04:26:46.045177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:46.322 { 00:10:46.322 "results": [ 00:10:46.322 { 00:10:46.322 "job": "raid_bdev1", 00:10:46.322 "core_mask": "0x1", 00:10:46.322 "workload": "randrw", 00:10:46.322 "percentage": 50, 00:10:46.322 "status": "finished", 00:10:46.322 "queue_depth": 1, 00:10:46.322 "io_size": 131072, 00:10:46.322 "runtime": 1.413826, 00:10:46.322 "iops": 14404.884335130348, 00:10:46.322 "mibps": 1800.6105418912935, 00:10:46.322 "io_failed": 1, 00:10:46.322 "io_timeout": 0, 00:10:46.322 "avg_latency_us": 97.36253563699992, 00:10:46.322 "min_latency_us": 25.152838427947597, 00:10:46.322 "max_latency_us": 1337.907423580786 00:10:46.322 } 00:10:46.322 ], 00:10:46.322 "core_count": 1 00:10:46.322 } 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85542 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85542 ']' 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85542 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85542 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85542' 00:10:46.322 killing process with pid 85542 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85542 00:10:46.322 [2024-12-13 04:26:46.081871] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.322 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85542 00:10:46.322 [2024-12-13 04:26:46.147976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3GJFKcwcqO 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:46.590 00:10:46.590 real 0m3.534s 00:10:46.590 user 0m4.326s 
00:10:46.590 sys 0m0.645s 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.590 04:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.590 ************************************ 00:10:46.590 END TEST raid_write_error_test 00:10:46.590 ************************************ 00:10:46.590 04:26:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:46.590 04:26:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:46.590 04:26:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:46.590 04:26:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.590 04:26:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.590 ************************************ 00:10:46.590 START TEST raid_state_function_test 00:10:46.590 ************************************ 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.590 
04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:46.590 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:46.591 04:26:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:46.591 Process raid pid: 85675 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85675 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85675' 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85675 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 85675 ']' 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.591 04:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.850 [2024-12-13 04:26:46.662831] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:46.850 [2024-12-13 04:26:46.662955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.850 [2024-12-13 04:26:46.820093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.850 [2024-12-13 04:26:46.860730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.110 [2024-12-13 04:26:46.937939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.110 [2024-12-13 04:26:46.937979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.680 [2024-12-13 04:26:47.489182] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.680 [2024-12-13 04:26:47.489247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.680 [2024-12-13 04:26:47.489258] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.680 [2024-12-13 04:26:47.489270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.680 [2024-12-13 04:26:47.489277] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:47.680 [2024-12-13 04:26:47.489289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.680 [2024-12-13 04:26:47.489296] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:47.680 [2024-12-13 04:26:47.489305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.680 "name": "Existed_Raid", 00:10:47.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.680 "strip_size_kb": 0, 00:10:47.680 "state": "configuring", 00:10:47.680 "raid_level": "raid1", 00:10:47.680 "superblock": false, 00:10:47.680 "num_base_bdevs": 4, 00:10:47.680 "num_base_bdevs_discovered": 0, 00:10:47.680 "num_base_bdevs_operational": 4, 00:10:47.680 "base_bdevs_list": [ 00:10:47.680 { 00:10:47.680 "name": "BaseBdev1", 00:10:47.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.680 "is_configured": false, 00:10:47.680 "data_offset": 0, 00:10:47.680 "data_size": 0 00:10:47.680 }, 00:10:47.680 { 00:10:47.680 "name": "BaseBdev2", 00:10:47.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.680 "is_configured": false, 00:10:47.680 "data_offset": 0, 00:10:47.680 "data_size": 0 00:10:47.680 }, 00:10:47.680 { 00:10:47.680 "name": "BaseBdev3", 00:10:47.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.680 "is_configured": false, 00:10:47.680 "data_offset": 0, 00:10:47.680 "data_size": 0 00:10:47.680 }, 00:10:47.680 { 00:10:47.680 "name": "BaseBdev4", 00:10:47.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.680 "is_configured": false, 00:10:47.680 "data_offset": 0, 00:10:47.680 "data_size": 0 00:10:47.680 } 00:10:47.680 ] 00:10:47.680 }' 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.680 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.940 [2024-12-13 04:26:47.944474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.940 [2024-12-13 04:26:47.944600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.940 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.200 [2024-12-13 04:26:47.956465] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.200 [2024-12-13 04:26:47.956566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.200 [2024-12-13 04:26:47.956592] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.200 [2024-12-13 04:26:47.956616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.200 [2024-12-13 04:26:47.956633] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.200 [2024-12-13 04:26:47.956653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.200 [2024-12-13 04:26:47.956670] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:48.200 [2024-12-13 04:26:47.956720] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.200 [2024-12-13 04:26:47.983363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.200 BaseBdev1 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.200 04:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.200 [ 00:10:48.200 { 00:10:48.200 "name": "BaseBdev1", 00:10:48.200 "aliases": [ 00:10:48.200 "ea6352f2-c70c-4101-9e01-da30c56ec81f" 00:10:48.200 ], 00:10:48.200 "product_name": "Malloc disk", 00:10:48.200 "block_size": 512, 00:10:48.200 "num_blocks": 65536, 00:10:48.200 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:48.200 "assigned_rate_limits": { 00:10:48.200 "rw_ios_per_sec": 0, 00:10:48.200 "rw_mbytes_per_sec": 0, 00:10:48.200 "r_mbytes_per_sec": 0, 00:10:48.200 "w_mbytes_per_sec": 0 00:10:48.200 }, 00:10:48.200 "claimed": true, 00:10:48.200 "claim_type": "exclusive_write", 00:10:48.200 "zoned": false, 00:10:48.200 "supported_io_types": { 00:10:48.200 "read": true, 00:10:48.200 "write": true, 00:10:48.200 "unmap": true, 00:10:48.200 "flush": true, 00:10:48.200 "reset": true, 00:10:48.200 "nvme_admin": false, 00:10:48.200 "nvme_io": false, 00:10:48.200 "nvme_io_md": false, 00:10:48.200 "write_zeroes": true, 00:10:48.200 "zcopy": true, 00:10:48.200 "get_zone_info": false, 00:10:48.200 "zone_management": false, 00:10:48.200 "zone_append": false, 00:10:48.200 "compare": false, 00:10:48.200 "compare_and_write": false, 00:10:48.200 "abort": true, 00:10:48.200 "seek_hole": false, 00:10:48.200 "seek_data": false, 00:10:48.200 "copy": true, 00:10:48.200 "nvme_iov_md": false 00:10:48.200 }, 00:10:48.200 "memory_domains": [ 00:10:48.200 { 00:10:48.200 "dma_device_id": "system", 00:10:48.200 "dma_device_type": 1 00:10:48.200 }, 00:10:48.200 { 00:10:48.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.200 "dma_device_type": 2 00:10:48.200 } 00:10:48.200 ], 00:10:48.200 "driver_specific": {} 00:10:48.200 } 00:10:48.200 ] 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.200 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.200 "name": "Existed_Raid", 
00:10:48.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.200 "strip_size_kb": 0, 00:10:48.200 "state": "configuring", 00:10:48.200 "raid_level": "raid1", 00:10:48.200 "superblock": false, 00:10:48.200 "num_base_bdevs": 4, 00:10:48.200 "num_base_bdevs_discovered": 1, 00:10:48.200 "num_base_bdevs_operational": 4, 00:10:48.200 "base_bdevs_list": [ 00:10:48.200 { 00:10:48.200 "name": "BaseBdev1", 00:10:48.200 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:48.200 "is_configured": true, 00:10:48.200 "data_offset": 0, 00:10:48.200 "data_size": 65536 00:10:48.200 }, 00:10:48.200 { 00:10:48.200 "name": "BaseBdev2", 00:10:48.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.200 "is_configured": false, 00:10:48.200 "data_offset": 0, 00:10:48.200 "data_size": 0 00:10:48.200 }, 00:10:48.200 { 00:10:48.200 "name": "BaseBdev3", 00:10:48.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.200 "is_configured": false, 00:10:48.200 "data_offset": 0, 00:10:48.200 "data_size": 0 00:10:48.200 }, 00:10:48.200 { 00:10:48.200 "name": "BaseBdev4", 00:10:48.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.201 "is_configured": false, 00:10:48.201 "data_offset": 0, 00:10:48.201 "data_size": 0 00:10:48.201 } 00:10:48.201 ] 00:10:48.201 }' 00:10:48.201 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.201 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.460 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.460 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.720 [2024-12-13 04:26:48.478572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.720 [2024-12-13 04:26:48.478689] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.720 [2024-12-13 04:26:48.490595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.720 [2024-12-13 04:26:48.492868] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.720 [2024-12-13 04:26:48.492946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.720 [2024-12-13 04:26:48.492976] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.720 [2024-12-13 04:26:48.493000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.720 [2024-12-13 04:26:48.493018] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:48.720 [2024-12-13 04:26:48.493040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.720 
04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.720 "name": "Existed_Raid", 00:10:48.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.720 "strip_size_kb": 0, 00:10:48.720 "state": "configuring", 00:10:48.720 "raid_level": "raid1", 00:10:48.720 "superblock": false, 00:10:48.720 "num_base_bdevs": 4, 00:10:48.720 "num_base_bdevs_discovered": 1, 
00:10:48.720 "num_base_bdevs_operational": 4, 00:10:48.720 "base_bdevs_list": [ 00:10:48.720 { 00:10:48.720 "name": "BaseBdev1", 00:10:48.720 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:48.720 "is_configured": true, 00:10:48.720 "data_offset": 0, 00:10:48.720 "data_size": 65536 00:10:48.720 }, 00:10:48.720 { 00:10:48.720 "name": "BaseBdev2", 00:10:48.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.720 "is_configured": false, 00:10:48.720 "data_offset": 0, 00:10:48.720 "data_size": 0 00:10:48.720 }, 00:10:48.720 { 00:10:48.720 "name": "BaseBdev3", 00:10:48.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.720 "is_configured": false, 00:10:48.720 "data_offset": 0, 00:10:48.720 "data_size": 0 00:10:48.720 }, 00:10:48.720 { 00:10:48.720 "name": "BaseBdev4", 00:10:48.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.720 "is_configured": false, 00:10:48.720 "data_offset": 0, 00:10:48.720 "data_size": 0 00:10:48.720 } 00:10:48.720 ] 00:10:48.720 }' 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.720 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.980 [2024-12-13 04:26:48.966499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.980 BaseBdev2 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.980 04:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.980 [ 00:10:48.980 { 00:10:48.980 "name": "BaseBdev2", 00:10:48.980 "aliases": [ 00:10:48.980 "e4709ce8-d44a-4508-a934-9c65dca256be" 00:10:48.980 ], 00:10:48.980 "product_name": "Malloc disk", 00:10:48.980 "block_size": 512, 00:10:48.980 "num_blocks": 65536, 00:10:48.980 "uuid": "e4709ce8-d44a-4508-a934-9c65dca256be", 00:10:48.980 "assigned_rate_limits": { 00:10:48.980 "rw_ios_per_sec": 0, 00:10:48.980 "rw_mbytes_per_sec": 0, 00:10:48.980 "r_mbytes_per_sec": 0, 00:10:48.980 "w_mbytes_per_sec": 0 00:10:48.980 }, 00:10:48.980 "claimed": true, 00:10:48.980 "claim_type": "exclusive_write", 00:10:48.980 "zoned": false, 00:10:49.240 "supported_io_types": { 00:10:49.240 "read": true, 
00:10:49.240 "write": true, 00:10:49.240 "unmap": true, 00:10:49.240 "flush": true, 00:10:49.240 "reset": true, 00:10:49.240 "nvme_admin": false, 00:10:49.240 "nvme_io": false, 00:10:49.240 "nvme_io_md": false, 00:10:49.240 "write_zeroes": true, 00:10:49.240 "zcopy": true, 00:10:49.240 "get_zone_info": false, 00:10:49.240 "zone_management": false, 00:10:49.240 "zone_append": false, 00:10:49.240 "compare": false, 00:10:49.240 "compare_and_write": false, 00:10:49.240 "abort": true, 00:10:49.240 "seek_hole": false, 00:10:49.240 "seek_data": false, 00:10:49.240 "copy": true, 00:10:49.240 "nvme_iov_md": false 00:10:49.240 }, 00:10:49.240 "memory_domains": [ 00:10:49.240 { 00:10:49.240 "dma_device_id": "system", 00:10:49.240 "dma_device_type": 1 00:10:49.240 }, 00:10:49.240 { 00:10:49.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.240 "dma_device_type": 2 00:10:49.240 } 00:10:49.240 ], 00:10:49.240 "driver_specific": {} 00:10:49.240 } 00:10:49.240 ] 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.240 "name": "Existed_Raid", 00:10:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.240 "strip_size_kb": 0, 00:10:49.240 "state": "configuring", 00:10:49.240 "raid_level": "raid1", 00:10:49.240 "superblock": false, 00:10:49.240 "num_base_bdevs": 4, 00:10:49.240 "num_base_bdevs_discovered": 2, 00:10:49.240 "num_base_bdevs_operational": 4, 00:10:49.240 "base_bdevs_list": [ 00:10:49.240 { 00:10:49.240 "name": "BaseBdev1", 00:10:49.240 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:49.240 "is_configured": true, 00:10:49.240 "data_offset": 0, 00:10:49.240 "data_size": 65536 00:10:49.240 }, 00:10:49.240 { 00:10:49.240 "name": "BaseBdev2", 00:10:49.240 "uuid": "e4709ce8-d44a-4508-a934-9c65dca256be", 00:10:49.240 "is_configured": true, 
00:10:49.240 "data_offset": 0, 00:10:49.240 "data_size": 65536 00:10:49.240 }, 00:10:49.240 { 00:10:49.240 "name": "BaseBdev3", 00:10:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.240 "is_configured": false, 00:10:49.240 "data_offset": 0, 00:10:49.240 "data_size": 0 00:10:49.240 }, 00:10:49.240 { 00:10:49.240 "name": "BaseBdev4", 00:10:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.240 "is_configured": false, 00:10:49.240 "data_offset": 0, 00:10:49.240 "data_size": 0 00:10:49.240 } 00:10:49.240 ] 00:10:49.240 }' 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.240 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.500 [2024-12-13 04:26:49.456819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.500 BaseBdev3 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.500 [ 00:10:49.500 { 00:10:49.500 "name": "BaseBdev3", 00:10:49.500 "aliases": [ 00:10:49.500 "e0d3f96b-bad8-4e09-9ae6-e97122c2ac60" 00:10:49.500 ], 00:10:49.500 "product_name": "Malloc disk", 00:10:49.500 "block_size": 512, 00:10:49.500 "num_blocks": 65536, 00:10:49.500 "uuid": "e0d3f96b-bad8-4e09-9ae6-e97122c2ac60", 00:10:49.500 "assigned_rate_limits": { 00:10:49.500 "rw_ios_per_sec": 0, 00:10:49.500 "rw_mbytes_per_sec": 0, 00:10:49.500 "r_mbytes_per_sec": 0, 00:10:49.500 "w_mbytes_per_sec": 0 00:10:49.500 }, 00:10:49.500 "claimed": true, 00:10:49.500 "claim_type": "exclusive_write", 00:10:49.500 "zoned": false, 00:10:49.500 "supported_io_types": { 00:10:49.500 "read": true, 00:10:49.500 "write": true, 00:10:49.500 "unmap": true, 00:10:49.500 "flush": true, 00:10:49.500 "reset": true, 00:10:49.500 "nvme_admin": false, 00:10:49.500 "nvme_io": false, 00:10:49.500 "nvme_io_md": false, 00:10:49.500 "write_zeroes": true, 00:10:49.500 "zcopy": true, 00:10:49.500 "get_zone_info": false, 00:10:49.500 "zone_management": false, 00:10:49.500 "zone_append": false, 00:10:49.500 "compare": false, 00:10:49.500 "compare_and_write": false, 
00:10:49.500 "abort": true, 00:10:49.500 "seek_hole": false, 00:10:49.500 "seek_data": false, 00:10:49.500 "copy": true, 00:10:49.500 "nvme_iov_md": false 00:10:49.500 }, 00:10:49.500 "memory_domains": [ 00:10:49.500 { 00:10:49.500 "dma_device_id": "system", 00:10:49.500 "dma_device_type": 1 00:10:49.500 }, 00:10:49.500 { 00:10:49.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.500 "dma_device_type": 2 00:10:49.500 } 00:10:49.500 ], 00:10:49.500 "driver_specific": {} 00:10:49.500 } 00:10:49.500 ] 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.500 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.760 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.760 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.760 "name": "Existed_Raid", 00:10:49.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.760 "strip_size_kb": 0, 00:10:49.760 "state": "configuring", 00:10:49.760 "raid_level": "raid1", 00:10:49.760 "superblock": false, 00:10:49.760 "num_base_bdevs": 4, 00:10:49.760 "num_base_bdevs_discovered": 3, 00:10:49.760 "num_base_bdevs_operational": 4, 00:10:49.760 "base_bdevs_list": [ 00:10:49.760 { 00:10:49.760 "name": "BaseBdev1", 00:10:49.760 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:49.760 "is_configured": true, 00:10:49.760 "data_offset": 0, 00:10:49.760 "data_size": 65536 00:10:49.760 }, 00:10:49.760 { 00:10:49.760 "name": "BaseBdev2", 00:10:49.760 "uuid": "e4709ce8-d44a-4508-a934-9c65dca256be", 00:10:49.760 "is_configured": true, 00:10:49.760 "data_offset": 0, 00:10:49.760 "data_size": 65536 00:10:49.760 }, 00:10:49.760 { 00:10:49.760 "name": "BaseBdev3", 00:10:49.760 "uuid": "e0d3f96b-bad8-4e09-9ae6-e97122c2ac60", 00:10:49.760 "is_configured": true, 00:10:49.760 "data_offset": 0, 00:10:49.760 "data_size": 65536 00:10:49.760 }, 00:10:49.760 { 00:10:49.760 "name": "BaseBdev4", 00:10:49.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.760 "is_configured": false, 
00:10:49.760 "data_offset": 0, 00:10:49.760 "data_size": 0 00:10:49.760 } 00:10:49.760 ] 00:10:49.760 }' 00:10:49.760 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.760 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.020 [2024-12-13 04:26:49.937259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.020 [2024-12-13 04:26:49.937326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:50.020 [2024-12-13 04:26:49.937341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:50.020 [2024-12-13 04:26:49.937703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:50.020 [2024-12-13 04:26:49.937877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:50.020 [2024-12-13 04:26:49.937902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:50.020 [2024-12-13 04:26:49.938147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.020 BaseBdev4 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.020 [ 00:10:50.020 { 00:10:50.020 "name": "BaseBdev4", 00:10:50.020 "aliases": [ 00:10:50.020 "c454e0ea-8a45-4f3e-b318-5a3786308d27" 00:10:50.020 ], 00:10:50.020 "product_name": "Malloc disk", 00:10:50.020 "block_size": 512, 00:10:50.020 "num_blocks": 65536, 00:10:50.020 "uuid": "c454e0ea-8a45-4f3e-b318-5a3786308d27", 00:10:50.020 "assigned_rate_limits": { 00:10:50.020 "rw_ios_per_sec": 0, 00:10:50.020 "rw_mbytes_per_sec": 0, 00:10:50.020 "r_mbytes_per_sec": 0, 00:10:50.020 "w_mbytes_per_sec": 0 00:10:50.020 }, 00:10:50.020 "claimed": true, 00:10:50.020 "claim_type": "exclusive_write", 00:10:50.020 "zoned": false, 00:10:50.020 "supported_io_types": { 00:10:50.020 "read": true, 00:10:50.020 "write": true, 00:10:50.020 "unmap": true, 00:10:50.020 "flush": true, 00:10:50.020 "reset": true, 00:10:50.020 
"nvme_admin": false, 00:10:50.020 "nvme_io": false, 00:10:50.020 "nvme_io_md": false, 00:10:50.020 "write_zeroes": true, 00:10:50.020 "zcopy": true, 00:10:50.020 "get_zone_info": false, 00:10:50.020 "zone_management": false, 00:10:50.020 "zone_append": false, 00:10:50.020 "compare": false, 00:10:50.020 "compare_and_write": false, 00:10:50.020 "abort": true, 00:10:50.020 "seek_hole": false, 00:10:50.020 "seek_data": false, 00:10:50.020 "copy": true, 00:10:50.020 "nvme_iov_md": false 00:10:50.020 }, 00:10:50.020 "memory_domains": [ 00:10:50.020 { 00:10:50.020 "dma_device_id": "system", 00:10:50.020 "dma_device_type": 1 00:10:50.020 }, 00:10:50.020 { 00:10:50.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.020 "dma_device_type": 2 00:10:50.020 } 00:10:50.020 ], 00:10:50.020 "driver_specific": {} 00:10:50.020 } 00:10:50.020 ] 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.020 04:26:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.020 04:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.020 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.020 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.020 "name": "Existed_Raid", 00:10:50.020 "uuid": "26cd1c70-2be9-4f64-84b5-a263ae1bfb94", 00:10:50.020 "strip_size_kb": 0, 00:10:50.020 "state": "online", 00:10:50.020 "raid_level": "raid1", 00:10:50.020 "superblock": false, 00:10:50.020 "num_base_bdevs": 4, 00:10:50.020 "num_base_bdevs_discovered": 4, 00:10:50.020 "num_base_bdevs_operational": 4, 00:10:50.020 "base_bdevs_list": [ 00:10:50.020 { 00:10:50.020 "name": "BaseBdev1", 00:10:50.020 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:50.020 "is_configured": true, 00:10:50.020 "data_offset": 0, 00:10:50.020 "data_size": 65536 00:10:50.020 }, 00:10:50.020 { 00:10:50.020 "name": "BaseBdev2", 00:10:50.020 "uuid": "e4709ce8-d44a-4508-a934-9c65dca256be", 00:10:50.020 "is_configured": true, 00:10:50.020 "data_offset": 0, 00:10:50.020 "data_size": 65536 00:10:50.020 }, 00:10:50.020 { 00:10:50.020 "name": "BaseBdev3", 00:10:50.020 "uuid": 
"e0d3f96b-bad8-4e09-9ae6-e97122c2ac60", 00:10:50.020 "is_configured": true, 00:10:50.020 "data_offset": 0, 00:10:50.020 "data_size": 65536 00:10:50.020 }, 00:10:50.020 { 00:10:50.020 "name": "BaseBdev4", 00:10:50.020 "uuid": "c454e0ea-8a45-4f3e-b318-5a3786308d27", 00:10:50.020 "is_configured": true, 00:10:50.020 "data_offset": 0, 00:10:50.020 "data_size": 65536 00:10:50.020 } 00:10:50.020 ] 00:10:50.020 }' 00:10:50.020 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.020 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.589 [2024-12-13 04:26:50.432859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.589 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.589 04:26:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.589 "name": "Existed_Raid", 00:10:50.589 "aliases": [ 00:10:50.589 "26cd1c70-2be9-4f64-84b5-a263ae1bfb94" 00:10:50.589 ], 00:10:50.589 "product_name": "Raid Volume", 00:10:50.589 "block_size": 512, 00:10:50.589 "num_blocks": 65536, 00:10:50.589 "uuid": "26cd1c70-2be9-4f64-84b5-a263ae1bfb94", 00:10:50.589 "assigned_rate_limits": { 00:10:50.589 "rw_ios_per_sec": 0, 00:10:50.589 "rw_mbytes_per_sec": 0, 00:10:50.589 "r_mbytes_per_sec": 0, 00:10:50.589 "w_mbytes_per_sec": 0 00:10:50.589 }, 00:10:50.589 "claimed": false, 00:10:50.589 "zoned": false, 00:10:50.589 "supported_io_types": { 00:10:50.589 "read": true, 00:10:50.589 "write": true, 00:10:50.589 "unmap": false, 00:10:50.589 "flush": false, 00:10:50.589 "reset": true, 00:10:50.589 "nvme_admin": false, 00:10:50.589 "nvme_io": false, 00:10:50.589 "nvme_io_md": false, 00:10:50.589 "write_zeroes": true, 00:10:50.589 "zcopy": false, 00:10:50.589 "get_zone_info": false, 00:10:50.589 "zone_management": false, 00:10:50.589 "zone_append": false, 00:10:50.589 "compare": false, 00:10:50.589 "compare_and_write": false, 00:10:50.589 "abort": false, 00:10:50.589 "seek_hole": false, 00:10:50.589 "seek_data": false, 00:10:50.589 "copy": false, 00:10:50.589 "nvme_iov_md": false 00:10:50.589 }, 00:10:50.589 "memory_domains": [ 00:10:50.589 { 00:10:50.589 "dma_device_id": "system", 00:10:50.589 "dma_device_type": 1 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.589 "dma_device_type": 2 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "system", 00:10:50.589 "dma_device_type": 1 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.589 "dma_device_type": 2 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "system", 00:10:50.589 "dma_device_type": 1 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:50.589 "dma_device_type": 2 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "system", 00:10:50.589 "dma_device_type": 1 00:10:50.589 }, 00:10:50.589 { 00:10:50.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.589 "dma_device_type": 2 00:10:50.589 } 00:10:50.589 ], 00:10:50.589 "driver_specific": { 00:10:50.589 "raid": { 00:10:50.589 "uuid": "26cd1c70-2be9-4f64-84b5-a263ae1bfb94", 00:10:50.589 "strip_size_kb": 0, 00:10:50.589 "state": "online", 00:10:50.589 "raid_level": "raid1", 00:10:50.589 "superblock": false, 00:10:50.589 "num_base_bdevs": 4, 00:10:50.589 "num_base_bdevs_discovered": 4, 00:10:50.590 "num_base_bdevs_operational": 4, 00:10:50.590 "base_bdevs_list": [ 00:10:50.590 { 00:10:50.590 "name": "BaseBdev1", 00:10:50.590 "uuid": "ea6352f2-c70c-4101-9e01-da30c56ec81f", 00:10:50.590 "is_configured": true, 00:10:50.590 "data_offset": 0, 00:10:50.590 "data_size": 65536 00:10:50.590 }, 00:10:50.590 { 00:10:50.590 "name": "BaseBdev2", 00:10:50.590 "uuid": "e4709ce8-d44a-4508-a934-9c65dca256be", 00:10:50.590 "is_configured": true, 00:10:50.590 "data_offset": 0, 00:10:50.590 "data_size": 65536 00:10:50.590 }, 00:10:50.590 { 00:10:50.590 "name": "BaseBdev3", 00:10:50.590 "uuid": "e0d3f96b-bad8-4e09-9ae6-e97122c2ac60", 00:10:50.590 "is_configured": true, 00:10:50.590 "data_offset": 0, 00:10:50.590 "data_size": 65536 00:10:50.590 }, 00:10:50.590 { 00:10:50.590 "name": "BaseBdev4", 00:10:50.590 "uuid": "c454e0ea-8a45-4f3e-b318-5a3786308d27", 00:10:50.590 "is_configured": true, 00:10:50.590 "data_offset": 0, 00:10:50.590 "data_size": 65536 00:10:50.590 } 00:10:50.590 ] 00:10:50.590 } 00:10:50.590 } 00:10:50.590 }' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.590 BaseBdev2 00:10:50.590 BaseBdev3 
00:10:50.590 BaseBdev4' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.590 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.850 04:26:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.850 04:26:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.850 [2024-12-13 04:26:50.724043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.850 
04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.850 "name": "Existed_Raid", 00:10:50.850 "uuid": "26cd1c70-2be9-4f64-84b5-a263ae1bfb94", 00:10:50.850 "strip_size_kb": 0, 00:10:50.850 "state": "online", 00:10:50.850 "raid_level": "raid1", 00:10:50.850 "superblock": false, 00:10:50.850 "num_base_bdevs": 4, 00:10:50.850 "num_base_bdevs_discovered": 3, 00:10:50.850 "num_base_bdevs_operational": 3, 00:10:50.850 "base_bdevs_list": [ 00:10:50.850 { 00:10:50.850 "name": null, 00:10:50.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.850 "is_configured": false, 00:10:50.850 "data_offset": 0, 00:10:50.850 "data_size": 65536 00:10:50.850 }, 00:10:50.850 { 00:10:50.850 "name": "BaseBdev2", 00:10:50.850 "uuid": "e4709ce8-d44a-4508-a934-9c65dca256be", 00:10:50.850 "is_configured": true, 00:10:50.850 "data_offset": 0, 00:10:50.850 "data_size": 65536 00:10:50.850 }, 00:10:50.850 { 00:10:50.850 "name": "BaseBdev3", 00:10:50.850 "uuid": "e0d3f96b-bad8-4e09-9ae6-e97122c2ac60", 00:10:50.850 "is_configured": true, 00:10:50.850 "data_offset": 0, 
00:10:50.850 "data_size": 65536 00:10:50.850 }, 00:10:50.850 { 00:10:50.850 "name": "BaseBdev4", 00:10:50.850 "uuid": "c454e0ea-8a45-4f3e-b318-5a3786308d27", 00:10:50.850 "is_configured": true, 00:10:50.850 "data_offset": 0, 00:10:50.850 "data_size": 65536 00:10:50.850 } 00:10:50.850 ] 00:10:50.850 }' 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.850 04:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 [2024-12-13 04:26:51.204580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 [2024-12-13 04:26:51.261238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 [2024-12-13 04:26:51.341104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:51.420 [2024-12-13 04:26:51.341253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.420 [2024-12-13 04:26:51.362393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.420 [2024-12-13 04:26:51.362530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.420 [2024-12-13 04:26:51.362580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 BaseBdev2 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.420 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.680 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.680 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.680 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.680 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.680 [ 00:10:51.680 { 00:10:51.680 "name": "BaseBdev2", 00:10:51.680 "aliases": [ 00:10:51.680 "3655520e-b9db-420e-b364-cef94b5e793f" 00:10:51.680 ], 00:10:51.680 "product_name": "Malloc disk", 00:10:51.680 "block_size": 512, 00:10:51.680 "num_blocks": 65536, 00:10:51.681 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:51.681 "assigned_rate_limits": { 00:10:51.681 "rw_ios_per_sec": 0, 00:10:51.681 "rw_mbytes_per_sec": 0, 00:10:51.681 "r_mbytes_per_sec": 0, 00:10:51.681 "w_mbytes_per_sec": 0 00:10:51.681 }, 00:10:51.681 "claimed": false, 00:10:51.681 "zoned": false, 00:10:51.681 "supported_io_types": { 00:10:51.681 "read": true, 00:10:51.681 "write": true, 00:10:51.681 "unmap": true, 00:10:51.681 "flush": true, 00:10:51.681 "reset": true, 00:10:51.681 "nvme_admin": false, 00:10:51.681 "nvme_io": false, 00:10:51.681 "nvme_io_md": false, 00:10:51.681 "write_zeroes": true, 00:10:51.681 "zcopy": true, 00:10:51.681 "get_zone_info": false, 00:10:51.681 "zone_management": false, 00:10:51.681 "zone_append": false, 
00:10:51.681 "compare": false, 00:10:51.681 "compare_and_write": false, 00:10:51.681 "abort": true, 00:10:51.681 "seek_hole": false, 00:10:51.681 "seek_data": false, 00:10:51.681 "copy": true, 00:10:51.681 "nvme_iov_md": false 00:10:51.681 }, 00:10:51.681 "memory_domains": [ 00:10:51.681 { 00:10:51.681 "dma_device_id": "system", 00:10:51.681 "dma_device_type": 1 00:10:51.681 }, 00:10:51.681 { 00:10:51.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.681 "dma_device_type": 2 00:10:51.681 } 00:10:51.681 ], 00:10:51.681 "driver_specific": {} 00:10:51.681 } 00:10:51.681 ] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 BaseBdev3 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 [ 00:10:51.681 { 00:10:51.681 "name": "BaseBdev3", 00:10:51.681 "aliases": [ 00:10:51.681 "ac5cee05-f5d0-4545-a99d-3faa46bd91af" 00:10:51.681 ], 00:10:51.681 "product_name": "Malloc disk", 00:10:51.681 "block_size": 512, 00:10:51.681 "num_blocks": 65536, 00:10:51.681 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:51.681 "assigned_rate_limits": { 00:10:51.681 "rw_ios_per_sec": 0, 00:10:51.681 "rw_mbytes_per_sec": 0, 00:10:51.681 "r_mbytes_per_sec": 0, 00:10:51.681 "w_mbytes_per_sec": 0 00:10:51.681 }, 00:10:51.681 "claimed": false, 00:10:51.681 "zoned": false, 00:10:51.681 "supported_io_types": { 00:10:51.681 "read": true, 00:10:51.681 "write": true, 00:10:51.681 "unmap": true, 00:10:51.681 "flush": true, 00:10:51.681 "reset": true, 00:10:51.681 "nvme_admin": false, 00:10:51.681 "nvme_io": false, 00:10:51.681 "nvme_io_md": false, 00:10:51.681 "write_zeroes": true, 00:10:51.681 "zcopy": true, 00:10:51.681 "get_zone_info": false, 00:10:51.681 "zone_management": false, 00:10:51.681 "zone_append": false, 
00:10:51.681 "compare": false, 00:10:51.681 "compare_and_write": false, 00:10:51.681 "abort": true, 00:10:51.681 "seek_hole": false, 00:10:51.681 "seek_data": false, 00:10:51.681 "copy": true, 00:10:51.681 "nvme_iov_md": false 00:10:51.681 }, 00:10:51.681 "memory_domains": [ 00:10:51.681 { 00:10:51.681 "dma_device_id": "system", 00:10:51.681 "dma_device_type": 1 00:10:51.681 }, 00:10:51.681 { 00:10:51.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.681 "dma_device_type": 2 00:10:51.681 } 00:10:51.681 ], 00:10:51.681 "driver_specific": {} 00:10:51.681 } 00:10:51.681 ] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 BaseBdev4 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 [ 00:10:51.681 { 00:10:51.681 "name": "BaseBdev4", 00:10:51.681 "aliases": [ 00:10:51.681 "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628" 00:10:51.681 ], 00:10:51.681 "product_name": "Malloc disk", 00:10:51.681 "block_size": 512, 00:10:51.681 "num_blocks": 65536, 00:10:51.681 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:51.681 "assigned_rate_limits": { 00:10:51.681 "rw_ios_per_sec": 0, 00:10:51.681 "rw_mbytes_per_sec": 0, 00:10:51.681 "r_mbytes_per_sec": 0, 00:10:51.681 "w_mbytes_per_sec": 0 00:10:51.681 }, 00:10:51.681 "claimed": false, 00:10:51.681 "zoned": false, 00:10:51.681 "supported_io_types": { 00:10:51.681 "read": true, 00:10:51.681 "write": true, 00:10:51.681 "unmap": true, 00:10:51.681 "flush": true, 00:10:51.681 "reset": true, 00:10:51.681 "nvme_admin": false, 00:10:51.681 "nvme_io": false, 00:10:51.681 "nvme_io_md": false, 00:10:51.681 "write_zeroes": true, 00:10:51.681 "zcopy": true, 00:10:51.681 "get_zone_info": false, 00:10:51.681 "zone_management": false, 00:10:51.681 "zone_append": false, 
00:10:51.681 "compare": false, 00:10:51.681 "compare_and_write": false, 00:10:51.681 "abort": true, 00:10:51.681 "seek_hole": false, 00:10:51.681 "seek_data": false, 00:10:51.681 "copy": true, 00:10:51.681 "nvme_iov_md": false 00:10:51.681 }, 00:10:51.681 "memory_domains": [ 00:10:51.681 { 00:10:51.681 "dma_device_id": "system", 00:10:51.681 "dma_device_type": 1 00:10:51.681 }, 00:10:51.681 { 00:10:51.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.681 "dma_device_type": 2 00:10:51.681 } 00:10:51.681 ], 00:10:51.681 "driver_specific": {} 00:10:51.681 } 00:10:51.681 ] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 [2024-12-13 04:26:51.586276] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.681 [2024-12-13 04:26:51.586399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.681 [2024-12-13 04:26:51.586438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.681 [2024-12-13 04:26:51.588613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.681 [2024-12-13 04:26:51.588710] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:51.681 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:51.682 "name": "Existed_Raid", 00:10:51.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.682 "strip_size_kb": 0, 00:10:51.682 "state": "configuring", 00:10:51.682 "raid_level": "raid1", 00:10:51.682 "superblock": false, 00:10:51.682 "num_base_bdevs": 4, 00:10:51.682 "num_base_bdevs_discovered": 3, 00:10:51.682 "num_base_bdevs_operational": 4, 00:10:51.682 "base_bdevs_list": [ 00:10:51.682 { 00:10:51.682 "name": "BaseBdev1", 00:10:51.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.682 "is_configured": false, 00:10:51.682 "data_offset": 0, 00:10:51.682 "data_size": 0 00:10:51.682 }, 00:10:51.682 { 00:10:51.682 "name": "BaseBdev2", 00:10:51.682 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:51.682 "is_configured": true, 00:10:51.682 "data_offset": 0, 00:10:51.682 "data_size": 65536 00:10:51.682 }, 00:10:51.682 { 00:10:51.682 "name": "BaseBdev3", 00:10:51.682 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:51.682 "is_configured": true, 00:10:51.682 "data_offset": 0, 00:10:51.682 "data_size": 65536 00:10:51.682 }, 00:10:51.682 { 00:10:51.682 "name": "BaseBdev4", 00:10:51.682 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:51.682 "is_configured": true, 00:10:51.682 "data_offset": 0, 00:10:51.682 "data_size": 65536 00:10:51.682 } 00:10:51.682 ] 00:10:51.682 }' 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.682 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.251 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:52.251 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.251 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.251 [2024-12-13 04:26:51.981570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:52.251 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.252 04:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.252 "name": "Existed_Raid", 00:10:52.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:52.252 "strip_size_kb": 0, 00:10:52.252 "state": "configuring", 00:10:52.252 "raid_level": "raid1", 00:10:52.252 "superblock": false, 00:10:52.252 "num_base_bdevs": 4, 00:10:52.252 "num_base_bdevs_discovered": 2, 00:10:52.252 "num_base_bdevs_operational": 4, 00:10:52.252 "base_bdevs_list": [ 00:10:52.252 { 00:10:52.252 "name": "BaseBdev1", 00:10:52.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.252 "is_configured": false, 00:10:52.252 "data_offset": 0, 00:10:52.252 "data_size": 0 00:10:52.252 }, 00:10:52.252 { 00:10:52.252 "name": null, 00:10:52.252 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:52.252 "is_configured": false, 00:10:52.252 "data_offset": 0, 00:10:52.252 "data_size": 65536 00:10:52.252 }, 00:10:52.252 { 00:10:52.252 "name": "BaseBdev3", 00:10:52.252 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:52.252 "is_configured": true, 00:10:52.252 "data_offset": 0, 00:10:52.252 "data_size": 65536 00:10:52.252 }, 00:10:52.252 { 00:10:52.252 "name": "BaseBdev4", 00:10:52.252 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:52.252 "is_configured": true, 00:10:52.252 "data_offset": 0, 00:10:52.252 "data_size": 65536 00:10:52.252 } 00:10:52.252 ] 00:10:52.252 }' 00:10:52.252 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.252 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 [2024-12-13 04:26:52.429528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.512 BaseBdev1 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 [ 00:10:52.512 { 00:10:52.512 "name": "BaseBdev1", 00:10:52.512 "aliases": [ 00:10:52.512 "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68" 00:10:52.512 ], 00:10:52.512 "product_name": "Malloc disk", 00:10:52.512 "block_size": 512, 00:10:52.512 "num_blocks": 65536, 00:10:52.512 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:52.512 "assigned_rate_limits": { 00:10:52.512 "rw_ios_per_sec": 0, 00:10:52.512 "rw_mbytes_per_sec": 0, 00:10:52.512 "r_mbytes_per_sec": 0, 00:10:52.512 "w_mbytes_per_sec": 0 00:10:52.512 }, 00:10:52.512 "claimed": true, 00:10:52.512 "claim_type": "exclusive_write", 00:10:52.512 "zoned": false, 00:10:52.512 "supported_io_types": { 00:10:52.512 "read": true, 00:10:52.512 "write": true, 00:10:52.512 "unmap": true, 00:10:52.512 "flush": true, 00:10:52.512 "reset": true, 00:10:52.512 "nvme_admin": false, 00:10:52.512 "nvme_io": false, 00:10:52.512 "nvme_io_md": false, 00:10:52.512 "write_zeroes": true, 00:10:52.512 "zcopy": true, 00:10:52.512 "get_zone_info": false, 00:10:52.512 "zone_management": false, 00:10:52.512 "zone_append": false, 00:10:52.512 "compare": false, 00:10:52.512 "compare_and_write": false, 00:10:52.512 "abort": true, 00:10:52.512 "seek_hole": false, 00:10:52.512 "seek_data": false, 00:10:52.512 "copy": true, 00:10:52.512 "nvme_iov_md": false 00:10:52.512 }, 00:10:52.512 "memory_domains": [ 00:10:52.512 { 00:10:52.512 "dma_device_id": "system", 00:10:52.512 "dma_device_type": 1 00:10:52.512 }, 00:10:52.512 { 00:10:52.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.512 "dma_device_type": 2 00:10:52.512 } 00:10:52.512 ], 00:10:52.512 "driver_specific": {} 00:10:52.512 } 00:10:52.512 ] 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.512 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.513 "name": "Existed_Raid", 00:10:52.513 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:52.513 "strip_size_kb": 0, 00:10:52.513 "state": "configuring", 00:10:52.513 "raid_level": "raid1", 00:10:52.513 "superblock": false, 00:10:52.513 "num_base_bdevs": 4, 00:10:52.513 "num_base_bdevs_discovered": 3, 00:10:52.513 "num_base_bdevs_operational": 4, 00:10:52.513 "base_bdevs_list": [ 00:10:52.513 { 00:10:52.513 "name": "BaseBdev1", 00:10:52.513 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:52.513 "is_configured": true, 00:10:52.513 "data_offset": 0, 00:10:52.513 "data_size": 65536 00:10:52.513 }, 00:10:52.513 { 00:10:52.513 "name": null, 00:10:52.513 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:52.513 "is_configured": false, 00:10:52.513 "data_offset": 0, 00:10:52.513 "data_size": 65536 00:10:52.513 }, 00:10:52.513 { 00:10:52.513 "name": "BaseBdev3", 00:10:52.513 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:52.513 "is_configured": true, 00:10:52.513 "data_offset": 0, 00:10:52.513 "data_size": 65536 00:10:52.513 }, 00:10:52.513 { 00:10:52.513 "name": "BaseBdev4", 00:10:52.513 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:52.513 "is_configured": true, 00:10:52.513 "data_offset": 0, 00:10:52.513 "data_size": 65536 00:10:52.513 } 00:10:52.513 ] 00:10:52.513 }' 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.513 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 [2024-12-13 04:26:52.924728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.083 "name": "Existed_Raid", 00:10:53.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.083 "strip_size_kb": 0, 00:10:53.083 "state": "configuring", 00:10:53.083 "raid_level": "raid1", 00:10:53.083 "superblock": false, 00:10:53.083 "num_base_bdevs": 4, 00:10:53.083 "num_base_bdevs_discovered": 2, 00:10:53.083 "num_base_bdevs_operational": 4, 00:10:53.083 "base_bdevs_list": [ 00:10:53.083 { 00:10:53.083 "name": "BaseBdev1", 00:10:53.083 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:53.083 "is_configured": true, 00:10:53.083 "data_offset": 0, 00:10:53.083 "data_size": 65536 00:10:53.083 }, 00:10:53.083 { 00:10:53.083 "name": null, 00:10:53.083 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:53.083 "is_configured": false, 00:10:53.083 "data_offset": 0, 00:10:53.083 "data_size": 65536 00:10:53.083 }, 00:10:53.083 { 00:10:53.083 "name": null, 00:10:53.083 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:53.083 "is_configured": false, 00:10:53.083 "data_offset": 0, 00:10:53.083 "data_size": 65536 00:10:53.083 }, 00:10:53.083 { 00:10:53.083 "name": "BaseBdev4", 00:10:53.083 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:53.083 "is_configured": true, 00:10:53.083 "data_offset": 0, 00:10:53.083 "data_size": 65536 00:10:53.083 } 00:10:53.083 ] 00:10:53.083 }' 00:10:53.083 04:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.083 04:26:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.342 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.342 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.342 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.342 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.602 [2024-12-13 04:26:53.400550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.602 04:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.602 "name": "Existed_Raid", 00:10:53.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.602 "strip_size_kb": 0, 00:10:53.602 "state": "configuring", 00:10:53.602 "raid_level": "raid1", 00:10:53.602 "superblock": false, 00:10:53.602 "num_base_bdevs": 4, 00:10:53.602 "num_base_bdevs_discovered": 3, 00:10:53.602 "num_base_bdevs_operational": 4, 00:10:53.602 "base_bdevs_list": [ 00:10:53.602 { 00:10:53.602 "name": "BaseBdev1", 00:10:53.602 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:53.602 "is_configured": true, 00:10:53.602 "data_offset": 0, 00:10:53.602 "data_size": 65536 00:10:53.602 }, 00:10:53.602 { 00:10:53.602 "name": null, 00:10:53.602 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:53.602 "is_configured": false, 00:10:53.602 "data_offset": 
0, 00:10:53.602 "data_size": 65536 00:10:53.602 }, 00:10:53.602 { 00:10:53.602 "name": "BaseBdev3", 00:10:53.602 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:53.602 "is_configured": true, 00:10:53.602 "data_offset": 0, 00:10:53.602 "data_size": 65536 00:10:53.602 }, 00:10:53.602 { 00:10:53.602 "name": "BaseBdev4", 00:10:53.602 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:53.602 "is_configured": true, 00:10:53.602 "data_offset": 0, 00:10:53.602 "data_size": 65536 00:10:53.602 } 00:10:53.602 ] 00:10:53.602 }' 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.602 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.862 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.862 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.862 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.862 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.862 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.121 [2024-12-13 04:26:53.900338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.121 04:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.121 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.122 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.122 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.122 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.122 "name": "Existed_Raid", 00:10:54.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.122 "strip_size_kb": 0, 00:10:54.122 "state": "configuring", 00:10:54.122 
"raid_level": "raid1", 00:10:54.122 "superblock": false, 00:10:54.122 "num_base_bdevs": 4, 00:10:54.122 "num_base_bdevs_discovered": 2, 00:10:54.122 "num_base_bdevs_operational": 4, 00:10:54.122 "base_bdevs_list": [ 00:10:54.122 { 00:10:54.122 "name": null, 00:10:54.122 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:54.122 "is_configured": false, 00:10:54.122 "data_offset": 0, 00:10:54.122 "data_size": 65536 00:10:54.122 }, 00:10:54.122 { 00:10:54.122 "name": null, 00:10:54.122 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:54.122 "is_configured": false, 00:10:54.122 "data_offset": 0, 00:10:54.122 "data_size": 65536 00:10:54.122 }, 00:10:54.122 { 00:10:54.122 "name": "BaseBdev3", 00:10:54.122 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:54.122 "is_configured": true, 00:10:54.122 "data_offset": 0, 00:10:54.122 "data_size": 65536 00:10:54.122 }, 00:10:54.122 { 00:10:54.122 "name": "BaseBdev4", 00:10:54.122 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:54.122 "is_configured": true, 00:10:54.122 "data_offset": 0, 00:10:54.122 "data_size": 65536 00:10:54.122 } 00:10:54.122 ] 00:10:54.122 }' 00:10:54.122 04:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.122 04:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.381 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.381 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.381 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.381 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.381 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.641 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:54.641 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:54.641 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.641 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.642 [2024-12-13 04:26:54.407329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.642 "name": "Existed_Raid", 00:10:54.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.642 "strip_size_kb": 0, 00:10:54.642 "state": "configuring", 00:10:54.642 "raid_level": "raid1", 00:10:54.642 "superblock": false, 00:10:54.642 "num_base_bdevs": 4, 00:10:54.642 "num_base_bdevs_discovered": 3, 00:10:54.642 "num_base_bdevs_operational": 4, 00:10:54.642 "base_bdevs_list": [ 00:10:54.642 { 00:10:54.642 "name": null, 00:10:54.642 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:54.642 "is_configured": false, 00:10:54.642 "data_offset": 0, 00:10:54.642 "data_size": 65536 00:10:54.642 }, 00:10:54.642 { 00:10:54.642 "name": "BaseBdev2", 00:10:54.642 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:54.642 "is_configured": true, 00:10:54.642 "data_offset": 0, 00:10:54.642 "data_size": 65536 00:10:54.642 }, 00:10:54.642 { 00:10:54.642 "name": "BaseBdev3", 00:10:54.642 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:54.642 "is_configured": true, 00:10:54.642 "data_offset": 0, 00:10:54.642 "data_size": 65536 00:10:54.642 }, 00:10:54.642 { 00:10:54.642 "name": "BaseBdev4", 00:10:54.642 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:54.642 "is_configured": true, 00:10:54.642 "data_offset": 0, 00:10:54.642 "data_size": 65536 00:10:54.642 } 00:10:54.642 ] 00:10:54.642 }' 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.642 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 04:26:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fb068bd1-44f5-4d8a-88e4-5c7e22d31b68 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.902 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.162 [2024-12-13 04:26:54.919227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:55.162 [2024-12-13 04:26:54.919340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:55.162 [2024-12-13 04:26:54.919372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:55.162 
[2024-12-13 04:26:54.919739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:55.162 [2024-12-13 04:26:54.919940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:55.162 [2024-12-13 04:26:54.919979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:55.162 [2024-12-13 04:26:54.920243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.162 NewBaseBdev 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.162 [ 00:10:55.162 { 00:10:55.162 "name": "NewBaseBdev", 00:10:55.162 "aliases": [ 00:10:55.162 "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68" 00:10:55.162 ], 00:10:55.162 "product_name": "Malloc disk", 00:10:55.162 "block_size": 512, 00:10:55.162 "num_blocks": 65536, 00:10:55.162 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:55.162 "assigned_rate_limits": { 00:10:55.162 "rw_ios_per_sec": 0, 00:10:55.162 "rw_mbytes_per_sec": 0, 00:10:55.162 "r_mbytes_per_sec": 0, 00:10:55.162 "w_mbytes_per_sec": 0 00:10:55.162 }, 00:10:55.162 "claimed": true, 00:10:55.162 "claim_type": "exclusive_write", 00:10:55.162 "zoned": false, 00:10:55.162 "supported_io_types": { 00:10:55.162 "read": true, 00:10:55.162 "write": true, 00:10:55.162 "unmap": true, 00:10:55.162 "flush": true, 00:10:55.162 "reset": true, 00:10:55.162 "nvme_admin": false, 00:10:55.162 "nvme_io": false, 00:10:55.162 "nvme_io_md": false, 00:10:55.162 "write_zeroes": true, 00:10:55.162 "zcopy": true, 00:10:55.162 "get_zone_info": false, 00:10:55.162 "zone_management": false, 00:10:55.162 "zone_append": false, 00:10:55.162 "compare": false, 00:10:55.162 "compare_and_write": false, 00:10:55.162 "abort": true, 00:10:55.162 "seek_hole": false, 00:10:55.162 "seek_data": false, 00:10:55.162 "copy": true, 00:10:55.162 "nvme_iov_md": false 00:10:55.162 }, 00:10:55.162 "memory_domains": [ 00:10:55.162 { 00:10:55.162 "dma_device_id": "system", 00:10:55.162 "dma_device_type": 1 00:10:55.162 }, 00:10:55.162 { 00:10:55.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.162 "dma_device_type": 2 00:10:55.162 } 00:10:55.162 ], 00:10:55.162 "driver_specific": {} 00:10:55.162 } 00:10:55.162 ] 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.162 04:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.162 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.162 "name": "Existed_Raid", 00:10:55.162 "uuid": "9b60a0d5-58ea-47e2-b16b-8c5517bfd352", 00:10:55.162 "strip_size_kb": 0, 00:10:55.162 "state": "online", 00:10:55.162 
"raid_level": "raid1", 00:10:55.162 "superblock": false, 00:10:55.162 "num_base_bdevs": 4, 00:10:55.162 "num_base_bdevs_discovered": 4, 00:10:55.162 "num_base_bdevs_operational": 4, 00:10:55.162 "base_bdevs_list": [ 00:10:55.162 { 00:10:55.162 "name": "NewBaseBdev", 00:10:55.162 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:55.162 "is_configured": true, 00:10:55.162 "data_offset": 0, 00:10:55.162 "data_size": 65536 00:10:55.162 }, 00:10:55.162 { 00:10:55.162 "name": "BaseBdev2", 00:10:55.162 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:55.162 "is_configured": true, 00:10:55.162 "data_offset": 0, 00:10:55.162 "data_size": 65536 00:10:55.162 }, 00:10:55.162 { 00:10:55.162 "name": "BaseBdev3", 00:10:55.162 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:55.162 "is_configured": true, 00:10:55.162 "data_offset": 0, 00:10:55.162 "data_size": 65536 00:10:55.162 }, 00:10:55.162 { 00:10:55.162 "name": "BaseBdev4", 00:10:55.163 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:55.163 "is_configured": true, 00:10:55.163 "data_offset": 0, 00:10:55.163 "data_size": 65536 00:10:55.163 } 00:10:55.163 ] 00:10:55.163 }' 00:10:55.163 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.163 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 [2024-12-13 04:26:55.382764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.422 "name": "Existed_Raid", 00:10:55.422 "aliases": [ 00:10:55.422 "9b60a0d5-58ea-47e2-b16b-8c5517bfd352" 00:10:55.422 ], 00:10:55.422 "product_name": "Raid Volume", 00:10:55.422 "block_size": 512, 00:10:55.422 "num_blocks": 65536, 00:10:55.422 "uuid": "9b60a0d5-58ea-47e2-b16b-8c5517bfd352", 00:10:55.422 "assigned_rate_limits": { 00:10:55.422 "rw_ios_per_sec": 0, 00:10:55.422 "rw_mbytes_per_sec": 0, 00:10:55.422 "r_mbytes_per_sec": 0, 00:10:55.422 "w_mbytes_per_sec": 0 00:10:55.422 }, 00:10:55.422 "claimed": false, 00:10:55.422 "zoned": false, 00:10:55.422 "supported_io_types": { 00:10:55.422 "read": true, 00:10:55.422 "write": true, 00:10:55.422 "unmap": false, 00:10:55.422 "flush": false, 00:10:55.422 "reset": true, 00:10:55.422 "nvme_admin": false, 00:10:55.422 "nvme_io": false, 00:10:55.422 "nvme_io_md": false, 00:10:55.422 "write_zeroes": true, 00:10:55.422 "zcopy": false, 00:10:55.422 "get_zone_info": false, 00:10:55.422 "zone_management": false, 00:10:55.422 "zone_append": false, 00:10:55.422 "compare": false, 00:10:55.422 "compare_and_write": false, 00:10:55.422 "abort": false, 00:10:55.422 "seek_hole": false, 00:10:55.422 "seek_data": false, 00:10:55.422 
"copy": false, 00:10:55.422 "nvme_iov_md": false 00:10:55.422 }, 00:10:55.422 "memory_domains": [ 00:10:55.422 { 00:10:55.422 "dma_device_id": "system", 00:10:55.422 "dma_device_type": 1 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.422 "dma_device_type": 2 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "system", 00:10:55.422 "dma_device_type": 1 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.422 "dma_device_type": 2 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "system", 00:10:55.422 "dma_device_type": 1 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.422 "dma_device_type": 2 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "system", 00:10:55.422 "dma_device_type": 1 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.422 "dma_device_type": 2 00:10:55.422 } 00:10:55.422 ], 00:10:55.422 "driver_specific": { 00:10:55.422 "raid": { 00:10:55.422 "uuid": "9b60a0d5-58ea-47e2-b16b-8c5517bfd352", 00:10:55.422 "strip_size_kb": 0, 00:10:55.422 "state": "online", 00:10:55.422 "raid_level": "raid1", 00:10:55.422 "superblock": false, 00:10:55.422 "num_base_bdevs": 4, 00:10:55.422 "num_base_bdevs_discovered": 4, 00:10:55.422 "num_base_bdevs_operational": 4, 00:10:55.422 "base_bdevs_list": [ 00:10:55.422 { 00:10:55.422 "name": "NewBaseBdev", 00:10:55.422 "uuid": "fb068bd1-44f5-4d8a-88e4-5c7e22d31b68", 00:10:55.422 "is_configured": true, 00:10:55.422 "data_offset": 0, 00:10:55.422 "data_size": 65536 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "name": "BaseBdev2", 00:10:55.422 "uuid": "3655520e-b9db-420e-b364-cef94b5e793f", 00:10:55.422 "is_configured": true, 00:10:55.422 "data_offset": 0, 00:10:55.422 "data_size": 65536 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "name": "BaseBdev3", 00:10:55.422 "uuid": "ac5cee05-f5d0-4545-a99d-3faa46bd91af", 00:10:55.422 
"is_configured": true, 00:10:55.422 "data_offset": 0, 00:10:55.422 "data_size": 65536 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "name": "BaseBdev4", 00:10:55.422 "uuid": "23d4f0aa-b0d9-4a1c-9d94-4bc4ddc19628", 00:10:55.422 "is_configured": true, 00:10:55.422 "data_offset": 0, 00:10:55.422 "data_size": 65536 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 } 00:10:55.422 } 00:10:55.422 }' 00:10:55.422 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:55.682 BaseBdev2 00:10:55.682 BaseBdev3 00:10:55.682 BaseBdev4' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.682 04:26:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.682 04:26:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.682 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.942 [2024-12-13 04:26:55.697905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.942 [2024-12-13 04:26:55.697978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.942 [2024-12-13 04:26:55.698069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.942 [2024-12-13 04:26:55.698349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.942 [2024-12-13 04:26:55.698366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 85675 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 85675 ']' 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 85675 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85675 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.942 killing process with pid 85675 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85675' 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 85675 00:10:55.942 [2024-12-13 04:26:55.747824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.942 04:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 85675 00:10:55.942 [2024-12-13 04:26:55.825295] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.202 04:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:56.202 00:10:56.202 real 0m9.596s 00:10:56.202 user 0m16.100s 00:10:56.202 sys 0m2.127s 00:10:56.202 04:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.202 ************************************ 00:10:56.202 END TEST raid_state_function_test 00:10:56.202 ************************************ 00:10:56.202 04:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:56.462 04:26:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:56.462 04:26:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:56.462 04:26:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.462 04:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 ************************************ 00:10:56.462 START TEST raid_state_function_test_sb 00:10:56.462 ************************************ 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.462 
04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86324 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86324' 00:10:56.462 Process raid pid: 86324 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86324 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86324 ']' 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.462 04:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 [2024-12-13 04:26:56.329026] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:56.462 [2024-12-13 04:26:56.329203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.462 [2024-12-13 04:26:56.465537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.722 [2024-12-13 04:26:56.504598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.722 [2024-12-13 04:26:56.580682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.722 [2024-12-13 04:26:56.580812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.291 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.291 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.292 [2024-12-13 04:26:57.166418] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.292 [2024-12-13 04:26:57.166512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.292 [2024-12-13 04:26:57.166523] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.292 [2024-12-13 04:26:57.166533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.292 [2024-12-13 04:26:57.166540] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:57.292 [2024-12-13 04:26:57.166555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.292 [2024-12-13 04:26:57.166560] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.292 [2024-12-13 04:26:57.166569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.292 04:26:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.292 "name": "Existed_Raid", 00:10:57.292 "uuid": "78404f84-cb1a-4b34-9fe2-d906e93c5476", 00:10:57.292 "strip_size_kb": 0, 00:10:57.292 "state": "configuring", 00:10:57.292 "raid_level": "raid1", 00:10:57.292 "superblock": true, 00:10:57.292 "num_base_bdevs": 4, 00:10:57.292 "num_base_bdevs_discovered": 0, 00:10:57.292 "num_base_bdevs_operational": 4, 00:10:57.292 "base_bdevs_list": [ 00:10:57.292 { 00:10:57.292 "name": "BaseBdev1", 00:10:57.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.292 "is_configured": false, 00:10:57.292 "data_offset": 0, 00:10:57.292 "data_size": 0 00:10:57.292 }, 00:10:57.292 { 00:10:57.292 "name": "BaseBdev2", 00:10:57.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.292 "is_configured": false, 00:10:57.292 "data_offset": 0, 00:10:57.292 "data_size": 0 00:10:57.292 }, 00:10:57.292 { 00:10:57.292 "name": "BaseBdev3", 00:10:57.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.292 "is_configured": false, 00:10:57.292 "data_offset": 0, 00:10:57.292 "data_size": 0 00:10:57.292 }, 00:10:57.292 { 00:10:57.292 "name": "BaseBdev4", 00:10:57.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.292 "is_configured": false, 00:10:57.292 "data_offset": 0, 00:10:57.292 "data_size": 0 00:10:57.292 } 00:10:57.292 ] 00:10:57.292 }' 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.292 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.861 [2024-12-13 04:26:57.585624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.861 [2024-12-13 04:26:57.585730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.861 [2024-12-13 04:26:57.597646] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.861 [2024-12-13 04:26:57.597725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.861 [2024-12-13 04:26:57.597751] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.861 [2024-12-13 04:26:57.597774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.861 [2024-12-13 04:26:57.597791] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.861 [2024-12-13 04:26:57.597811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.861 [2024-12-13 04:26:57.597828] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:57.861 [2024-12-13 04:26:57.597864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.861 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.861 [2024-12-13 04:26:57.624530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.861 BaseBdev1 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.862 [ 00:10:57.862 { 00:10:57.862 "name": "BaseBdev1", 00:10:57.862 "aliases": [ 00:10:57.862 "215266ec-f406-4a92-905d-6a00ccb1cca4" 00:10:57.862 ], 00:10:57.862 "product_name": "Malloc disk", 00:10:57.862 "block_size": 512, 00:10:57.862 "num_blocks": 65536, 00:10:57.862 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:10:57.862 "assigned_rate_limits": { 00:10:57.862 "rw_ios_per_sec": 0, 00:10:57.862 "rw_mbytes_per_sec": 0, 00:10:57.862 "r_mbytes_per_sec": 0, 00:10:57.862 "w_mbytes_per_sec": 0 00:10:57.862 }, 00:10:57.862 "claimed": true, 00:10:57.862 "claim_type": "exclusive_write", 00:10:57.862 "zoned": false, 00:10:57.862 "supported_io_types": { 00:10:57.862 "read": true, 00:10:57.862 "write": true, 00:10:57.862 "unmap": true, 00:10:57.862 "flush": true, 00:10:57.862 "reset": true, 00:10:57.862 "nvme_admin": false, 00:10:57.862 "nvme_io": false, 00:10:57.862 "nvme_io_md": false, 00:10:57.862 "write_zeroes": true, 00:10:57.862 "zcopy": true, 00:10:57.862 "get_zone_info": false, 00:10:57.862 "zone_management": false, 00:10:57.862 "zone_append": false, 00:10:57.862 "compare": false, 00:10:57.862 "compare_and_write": false, 00:10:57.862 "abort": true, 00:10:57.862 "seek_hole": false, 00:10:57.862 "seek_data": false, 00:10:57.862 "copy": true, 00:10:57.862 "nvme_iov_md": false 00:10:57.862 }, 00:10:57.862 "memory_domains": [ 00:10:57.862 { 00:10:57.862 "dma_device_id": "system", 00:10:57.862 "dma_device_type": 1 00:10:57.862 }, 00:10:57.862 { 00:10:57.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.862 "dma_device_type": 2 00:10:57.862 } 00:10:57.862 ], 00:10:57.862 "driver_specific": {} 
00:10:57.862 } 00:10:57.862 ] 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.862 "name": "Existed_Raid", 00:10:57.862 "uuid": "a28fd537-13ba-41ef-b84a-129a81109f99", 00:10:57.862 "strip_size_kb": 0, 00:10:57.862 "state": "configuring", 00:10:57.862 "raid_level": "raid1", 00:10:57.862 "superblock": true, 00:10:57.862 "num_base_bdevs": 4, 00:10:57.862 "num_base_bdevs_discovered": 1, 00:10:57.862 "num_base_bdevs_operational": 4, 00:10:57.862 "base_bdevs_list": [ 00:10:57.862 { 00:10:57.862 "name": "BaseBdev1", 00:10:57.862 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:10:57.862 "is_configured": true, 00:10:57.862 "data_offset": 2048, 00:10:57.862 "data_size": 63488 00:10:57.862 }, 00:10:57.862 { 00:10:57.862 "name": "BaseBdev2", 00:10:57.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.862 "is_configured": false, 00:10:57.862 "data_offset": 0, 00:10:57.862 "data_size": 0 00:10:57.862 }, 00:10:57.862 { 00:10:57.862 "name": "BaseBdev3", 00:10:57.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.862 "is_configured": false, 00:10:57.862 "data_offset": 0, 00:10:57.862 "data_size": 0 00:10:57.862 }, 00:10:57.862 { 00:10:57.862 "name": "BaseBdev4", 00:10:57.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.862 "is_configured": false, 00:10:57.862 "data_offset": 0, 00:10:57.862 "data_size": 0 00:10:57.862 } 00:10:57.862 ] 00:10:57.862 }' 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.862 04:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.454 [2024-12-13 04:26:58.151630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.454 [2024-12-13 04:26:58.151679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 [2024-12-13 04:26:58.159645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.454 [2024-12-13 04:26:58.161873] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.454 [2024-12-13 04:26:58.161962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.454 [2024-12-13 04:26:58.161989] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:58.454 [2024-12-13 04:26:58.162011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.454 [2024-12-13 04:26:58.162028] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:58.454 [2024-12-13 04:26:58.162048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:58.454 04:26:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.454 "name": 
"Existed_Raid", 00:10:58.454 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:10:58.454 "strip_size_kb": 0, 00:10:58.454 "state": "configuring", 00:10:58.454 "raid_level": "raid1", 00:10:58.454 "superblock": true, 00:10:58.454 "num_base_bdevs": 4, 00:10:58.454 "num_base_bdevs_discovered": 1, 00:10:58.454 "num_base_bdevs_operational": 4, 00:10:58.454 "base_bdevs_list": [ 00:10:58.454 { 00:10:58.454 "name": "BaseBdev1", 00:10:58.454 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:10:58.454 "is_configured": true, 00:10:58.454 "data_offset": 2048, 00:10:58.454 "data_size": 63488 00:10:58.454 }, 00:10:58.454 { 00:10:58.454 "name": "BaseBdev2", 00:10:58.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.454 "is_configured": false, 00:10:58.454 "data_offset": 0, 00:10:58.454 "data_size": 0 00:10:58.454 }, 00:10:58.454 { 00:10:58.454 "name": "BaseBdev3", 00:10:58.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.454 "is_configured": false, 00:10:58.454 "data_offset": 0, 00:10:58.454 "data_size": 0 00:10:58.454 }, 00:10:58.454 { 00:10:58.454 "name": "BaseBdev4", 00:10:58.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.454 "is_configured": false, 00:10:58.454 "data_offset": 0, 00:10:58.454 "data_size": 0 00:10:58.454 } 00:10:58.454 ] 00:10:58.454 }' 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.454 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 [2024-12-13 04:26:58.643697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.715 
BaseBdev2 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 [ 00:10:58.715 { 00:10:58.715 "name": "BaseBdev2", 00:10:58.715 "aliases": [ 00:10:58.715 "7f48b409-433f-4ca9-b8d4-d070256b5eb4" 00:10:58.715 ], 00:10:58.715 "product_name": "Malloc disk", 00:10:58.715 "block_size": 512, 00:10:58.715 "num_blocks": 65536, 00:10:58.715 "uuid": "7f48b409-433f-4ca9-b8d4-d070256b5eb4", 00:10:58.715 "assigned_rate_limits": { 
00:10:58.715 "rw_ios_per_sec": 0, 00:10:58.715 "rw_mbytes_per_sec": 0, 00:10:58.715 "r_mbytes_per_sec": 0, 00:10:58.715 "w_mbytes_per_sec": 0 00:10:58.715 }, 00:10:58.715 "claimed": true, 00:10:58.715 "claim_type": "exclusive_write", 00:10:58.715 "zoned": false, 00:10:58.715 "supported_io_types": { 00:10:58.715 "read": true, 00:10:58.715 "write": true, 00:10:58.715 "unmap": true, 00:10:58.715 "flush": true, 00:10:58.715 "reset": true, 00:10:58.715 "nvme_admin": false, 00:10:58.715 "nvme_io": false, 00:10:58.715 "nvme_io_md": false, 00:10:58.715 "write_zeroes": true, 00:10:58.715 "zcopy": true, 00:10:58.715 "get_zone_info": false, 00:10:58.715 "zone_management": false, 00:10:58.715 "zone_append": false, 00:10:58.715 "compare": false, 00:10:58.715 "compare_and_write": false, 00:10:58.715 "abort": true, 00:10:58.715 "seek_hole": false, 00:10:58.715 "seek_data": false, 00:10:58.715 "copy": true, 00:10:58.715 "nvme_iov_md": false 00:10:58.715 }, 00:10:58.715 "memory_domains": [ 00:10:58.715 { 00:10:58.715 "dma_device_id": "system", 00:10:58.715 "dma_device_type": 1 00:10:58.715 }, 00:10:58.715 { 00:10:58.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.715 "dma_device_type": 2 00:10:58.715 } 00:10:58.715 ], 00:10:58.715 "driver_specific": {} 00:10:58.715 } 00:10:58.715 ] 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.715 "name": "Existed_Raid", 00:10:58.715 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:10:58.715 "strip_size_kb": 0, 00:10:58.715 "state": "configuring", 00:10:58.715 "raid_level": "raid1", 00:10:58.715 "superblock": true, 00:10:58.715 "num_base_bdevs": 4, 00:10:58.715 "num_base_bdevs_discovered": 2, 00:10:58.715 "num_base_bdevs_operational": 4, 00:10:58.715 
"base_bdevs_list": [ 00:10:58.715 { 00:10:58.715 "name": "BaseBdev1", 00:10:58.715 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:10:58.715 "is_configured": true, 00:10:58.715 "data_offset": 2048, 00:10:58.715 "data_size": 63488 00:10:58.715 }, 00:10:58.715 { 00:10:58.715 "name": "BaseBdev2", 00:10:58.715 "uuid": "7f48b409-433f-4ca9-b8d4-d070256b5eb4", 00:10:58.715 "is_configured": true, 00:10:58.715 "data_offset": 2048, 00:10:58.715 "data_size": 63488 00:10:58.715 }, 00:10:58.715 { 00:10:58.715 "name": "BaseBdev3", 00:10:58.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.715 "is_configured": false, 00:10:58.715 "data_offset": 0, 00:10:58.715 "data_size": 0 00:10:58.715 }, 00:10:58.715 { 00:10:58.715 "name": "BaseBdev4", 00:10:58.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.715 "is_configured": false, 00:10:58.715 "data_offset": 0, 00:10:58.715 "data_size": 0 00:10:58.715 } 00:10:58.715 ] 00:10:58.715 }' 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.715 04:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.285 [2024-12-13 04:26:59.122765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.285 BaseBdev3 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.285 [ 00:10:59.285 { 00:10:59.285 "name": "BaseBdev3", 00:10:59.285 "aliases": [ 00:10:59.285 "7dbdbdd3-5fb4-46de-9c5d-42028937a59d" 00:10:59.285 ], 00:10:59.285 "product_name": "Malloc disk", 00:10:59.285 "block_size": 512, 00:10:59.285 "num_blocks": 65536, 00:10:59.285 "uuid": "7dbdbdd3-5fb4-46de-9c5d-42028937a59d", 00:10:59.285 "assigned_rate_limits": { 00:10:59.285 "rw_ios_per_sec": 0, 00:10:59.285 "rw_mbytes_per_sec": 0, 00:10:59.285 "r_mbytes_per_sec": 0, 00:10:59.285 "w_mbytes_per_sec": 0 00:10:59.285 }, 00:10:59.285 "claimed": true, 00:10:59.285 "claim_type": "exclusive_write", 00:10:59.285 "zoned": false, 00:10:59.285 "supported_io_types": { 00:10:59.285 "read": true, 00:10:59.285 
"write": true, 00:10:59.285 "unmap": true, 00:10:59.285 "flush": true, 00:10:59.285 "reset": true, 00:10:59.285 "nvme_admin": false, 00:10:59.285 "nvme_io": false, 00:10:59.285 "nvme_io_md": false, 00:10:59.285 "write_zeroes": true, 00:10:59.285 "zcopy": true, 00:10:59.285 "get_zone_info": false, 00:10:59.285 "zone_management": false, 00:10:59.285 "zone_append": false, 00:10:59.285 "compare": false, 00:10:59.285 "compare_and_write": false, 00:10:59.285 "abort": true, 00:10:59.285 "seek_hole": false, 00:10:59.285 "seek_data": false, 00:10:59.285 "copy": true, 00:10:59.285 "nvme_iov_md": false 00:10:59.285 }, 00:10:59.285 "memory_domains": [ 00:10:59.285 { 00:10:59.285 "dma_device_id": "system", 00:10:59.285 "dma_device_type": 1 00:10:59.285 }, 00:10:59.285 { 00:10:59.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.285 "dma_device_type": 2 00:10:59.285 } 00:10:59.285 ], 00:10:59.285 "driver_specific": {} 00:10:59.285 } 00:10:59.285 ] 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.285 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.285 "name": "Existed_Raid", 00:10:59.285 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:10:59.285 "strip_size_kb": 0, 00:10:59.285 "state": "configuring", 00:10:59.285 "raid_level": "raid1", 00:10:59.285 "superblock": true, 00:10:59.285 "num_base_bdevs": 4, 00:10:59.285 "num_base_bdevs_discovered": 3, 00:10:59.285 "num_base_bdevs_operational": 4, 00:10:59.285 "base_bdevs_list": [ 00:10:59.285 { 00:10:59.285 "name": "BaseBdev1", 00:10:59.285 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:10:59.285 "is_configured": true, 00:10:59.285 "data_offset": 2048, 00:10:59.285 "data_size": 63488 00:10:59.285 }, 00:10:59.285 { 00:10:59.285 "name": "BaseBdev2", 00:10:59.285 "uuid": 
"7f48b409-433f-4ca9-b8d4-d070256b5eb4", 00:10:59.285 "is_configured": true, 00:10:59.285 "data_offset": 2048, 00:10:59.285 "data_size": 63488 00:10:59.285 }, 00:10:59.285 { 00:10:59.285 "name": "BaseBdev3", 00:10:59.286 "uuid": "7dbdbdd3-5fb4-46de-9c5d-42028937a59d", 00:10:59.286 "is_configured": true, 00:10:59.286 "data_offset": 2048, 00:10:59.286 "data_size": 63488 00:10:59.286 }, 00:10:59.286 { 00:10:59.286 "name": "BaseBdev4", 00:10:59.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.286 "is_configured": false, 00:10:59.286 "data_offset": 0, 00:10:59.286 "data_size": 0 00:10:59.286 } 00:10:59.286 ] 00:10:59.286 }' 00:10:59.286 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.286 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.855 BaseBdev4 00:10:59.855 [2024-12-13 04:26:59.594853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.855 [2024-12-13 04:26:59.595120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:59.855 [2024-12-13 04:26:59.595145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.855 [2024-12-13 04:26:59.595446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:59.855 [2024-12-13 04:26:59.595649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:59.855 [2024-12-13 04:26:59.595662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:59.855 [2024-12-13 04:26:59.595859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.855 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.856 [ 00:10:59.856 { 00:10:59.856 "name": "BaseBdev4", 00:10:59.856 "aliases": [ 00:10:59.856 "49b200e8-0045-48da-b130-03f785c3dbfa" 00:10:59.856 ], 00:10:59.856 "product_name": "Malloc disk", 00:10:59.856 "block_size": 512, 00:10:59.856 
"num_blocks": 65536, 00:10:59.856 "uuid": "49b200e8-0045-48da-b130-03f785c3dbfa", 00:10:59.856 "assigned_rate_limits": { 00:10:59.856 "rw_ios_per_sec": 0, 00:10:59.856 "rw_mbytes_per_sec": 0, 00:10:59.856 "r_mbytes_per_sec": 0, 00:10:59.856 "w_mbytes_per_sec": 0 00:10:59.856 }, 00:10:59.856 "claimed": true, 00:10:59.856 "claim_type": "exclusive_write", 00:10:59.856 "zoned": false, 00:10:59.856 "supported_io_types": { 00:10:59.856 "read": true, 00:10:59.856 "write": true, 00:10:59.856 "unmap": true, 00:10:59.856 "flush": true, 00:10:59.856 "reset": true, 00:10:59.856 "nvme_admin": false, 00:10:59.856 "nvme_io": false, 00:10:59.856 "nvme_io_md": false, 00:10:59.856 "write_zeroes": true, 00:10:59.856 "zcopy": true, 00:10:59.856 "get_zone_info": false, 00:10:59.856 "zone_management": false, 00:10:59.856 "zone_append": false, 00:10:59.856 "compare": false, 00:10:59.856 "compare_and_write": false, 00:10:59.856 "abort": true, 00:10:59.856 "seek_hole": false, 00:10:59.856 "seek_data": false, 00:10:59.856 "copy": true, 00:10:59.856 "nvme_iov_md": false 00:10:59.856 }, 00:10:59.856 "memory_domains": [ 00:10:59.856 { 00:10:59.856 "dma_device_id": "system", 00:10:59.856 "dma_device_type": 1 00:10:59.856 }, 00:10:59.856 { 00:10:59.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.856 "dma_device_type": 2 00:10:59.856 } 00:10:59.856 ], 00:10:59.856 "driver_specific": {} 00:10:59.856 } 00:10:59.856 ] 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.856 "name": "Existed_Raid", 00:10:59.856 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:10:59.856 "strip_size_kb": 0, 00:10:59.856 "state": "online", 00:10:59.856 "raid_level": "raid1", 00:10:59.856 "superblock": true, 00:10:59.856 "num_base_bdevs": 4, 
00:10:59.856 "num_base_bdevs_discovered": 4, 00:10:59.856 "num_base_bdevs_operational": 4, 00:10:59.856 "base_bdevs_list": [ 00:10:59.856 { 00:10:59.856 "name": "BaseBdev1", 00:10:59.856 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:10:59.856 "is_configured": true, 00:10:59.856 "data_offset": 2048, 00:10:59.856 "data_size": 63488 00:10:59.856 }, 00:10:59.856 { 00:10:59.856 "name": "BaseBdev2", 00:10:59.856 "uuid": "7f48b409-433f-4ca9-b8d4-d070256b5eb4", 00:10:59.856 "is_configured": true, 00:10:59.856 "data_offset": 2048, 00:10:59.856 "data_size": 63488 00:10:59.856 }, 00:10:59.856 { 00:10:59.856 "name": "BaseBdev3", 00:10:59.856 "uuid": "7dbdbdd3-5fb4-46de-9c5d-42028937a59d", 00:10:59.856 "is_configured": true, 00:10:59.856 "data_offset": 2048, 00:10:59.856 "data_size": 63488 00:10:59.856 }, 00:10:59.856 { 00:10:59.856 "name": "BaseBdev4", 00:10:59.856 "uuid": "49b200e8-0045-48da-b130-03f785c3dbfa", 00:10:59.856 "is_configured": true, 00:10:59.856 "data_offset": 2048, 00:10:59.856 "data_size": 63488 00:10:59.856 } 00:10:59.856 ] 00:10:59.856 }' 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.856 04:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.116 
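The `verify_raid_bdev_state` call above captures `raid_bdev_info` by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'`. As a minimal sketch of what that filter does, here is a Python equivalent over a hand-made payload (the names and field values below are illustrative stand-ins, not taken from a live SPDK target):

```python
import json

# Illustrative payload shaped like `bdev_raid_get_bdevs all` output;
# the entries here are made up for the sketch.
payload = json.dumps([
    {"name": "Existed_Raid", "state": "online", "raid_level": "raid1",
     "num_base_bdevs": 4, "num_base_bdevs_discovered": 4,
     "num_base_bdevs_operational": 4},
    {"name": "Other_Raid", "state": "configuring", "raid_level": "raid0",
     "num_base_bdevs": 2, "num_base_bdevs_discovered": 1,
     "num_base_bdevs_operational": 2},
])

def select_raid(dump, name):
    """Mimic jq '.[] | select(.name == NAME)' over the RPC dump:
    return the first bdev entry with a matching name, else None."""
    return next((b for b in json.loads(dump) if b["name"] == name), None)

info = select_raid(payload, "Existed_Raid")
print(info["state"], info["num_base_bdevs_discovered"])  # online 4
```

The shell test then greps fields such as `state` and `num_base_bdevs_discovered` out of the captured JSON; the dictionary lookup above is the same check without the text round-trip.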
04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.116 [2024-12-13 04:27:00.046456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.116 "name": "Existed_Raid", 00:11:00.116 "aliases": [ 00:11:00.116 "361e935d-901a-404b-960d-95f7b6c60218" 00:11:00.116 ], 00:11:00.116 "product_name": "Raid Volume", 00:11:00.116 "block_size": 512, 00:11:00.116 "num_blocks": 63488, 00:11:00.116 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:11:00.116 "assigned_rate_limits": { 00:11:00.116 "rw_ios_per_sec": 0, 00:11:00.116 "rw_mbytes_per_sec": 0, 00:11:00.116 "r_mbytes_per_sec": 0, 00:11:00.116 "w_mbytes_per_sec": 0 00:11:00.116 }, 00:11:00.116 "claimed": false, 00:11:00.116 "zoned": false, 00:11:00.116 "supported_io_types": { 00:11:00.116 "read": true, 00:11:00.116 "write": true, 00:11:00.116 "unmap": false, 00:11:00.116 "flush": false, 00:11:00.116 "reset": true, 00:11:00.116 "nvme_admin": false, 00:11:00.116 "nvme_io": false, 00:11:00.116 "nvme_io_md": false, 00:11:00.116 "write_zeroes": true, 00:11:00.116 "zcopy": false, 00:11:00.116 "get_zone_info": false, 00:11:00.116 "zone_management": false, 00:11:00.116 "zone_append": false, 00:11:00.116 "compare": false, 00:11:00.116 "compare_and_write": false, 00:11:00.116 "abort": false, 00:11:00.116 "seek_hole": false, 00:11:00.116 "seek_data": false, 00:11:00.116 "copy": false, 00:11:00.116 
"nvme_iov_md": false 00:11:00.116 }, 00:11:00.116 "memory_domains": [ 00:11:00.116 { 00:11:00.116 "dma_device_id": "system", 00:11:00.116 "dma_device_type": 1 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.116 "dma_device_type": 2 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "system", 00:11:00.116 "dma_device_type": 1 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.116 "dma_device_type": 2 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "system", 00:11:00.116 "dma_device_type": 1 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.116 "dma_device_type": 2 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "system", 00:11:00.116 "dma_device_type": 1 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.116 "dma_device_type": 2 00:11:00.116 } 00:11:00.116 ], 00:11:00.116 "driver_specific": { 00:11:00.116 "raid": { 00:11:00.116 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:11:00.116 "strip_size_kb": 0, 00:11:00.116 "state": "online", 00:11:00.116 "raid_level": "raid1", 00:11:00.116 "superblock": true, 00:11:00.116 "num_base_bdevs": 4, 00:11:00.116 "num_base_bdevs_discovered": 4, 00:11:00.116 "num_base_bdevs_operational": 4, 00:11:00.116 "base_bdevs_list": [ 00:11:00.116 { 00:11:00.116 "name": "BaseBdev1", 00:11:00.116 "uuid": "215266ec-f406-4a92-905d-6a00ccb1cca4", 00:11:00.116 "is_configured": true, 00:11:00.116 "data_offset": 2048, 00:11:00.116 "data_size": 63488 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "name": "BaseBdev2", 00:11:00.116 "uuid": "7f48b409-433f-4ca9-b8d4-d070256b5eb4", 00:11:00.116 "is_configured": true, 00:11:00.116 "data_offset": 2048, 00:11:00.116 "data_size": 63488 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "name": "BaseBdev3", 00:11:00.116 "uuid": "7dbdbdd3-5fb4-46de-9c5d-42028937a59d", 00:11:00.116 "is_configured": true, 
00:11:00.116 "data_offset": 2048, 00:11:00.116 "data_size": 63488 00:11:00.116 }, 00:11:00.116 { 00:11:00.116 "name": "BaseBdev4", 00:11:00.116 "uuid": "49b200e8-0045-48da-b130-03f785c3dbfa", 00:11:00.116 "is_configured": true, 00:11:00.116 "data_offset": 2048, 00:11:00.116 "data_size": 63488 00:11:00.116 } 00:11:00.116 ] 00:11:00.116 } 00:11:00.116 } 00:11:00.116 }' 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:00.116 BaseBdev2 00:11:00.116 BaseBdev3 00:11:00.116 BaseBdev4' 00:11:00.116 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.376 04:27:00 
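`verify_raid_bdev_properties` above builds `base_bdev_names` with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` and a compare key with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`, then glob-matches `cmp_base_bdev` against `512` plus three spaces. A rough Python rendering of both filters, over a trimmed copy of the dump (only the fields the filters touch are kept, and treating absent metadata fields as jq nulls is an assumption of this sketch):

```python
import json

# Trimmed stand-in for the Existed_Raid dump captured in the log.
raid_info = json.loads("""{
  "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null,
  "driver_specific": {"raid": {"base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": false}
  ]}}
}""")

def configured_names(info):
    """jq '.driver_specific.raid.base_bdevs_list[]
           | select(.is_configured == true).name'"""
    return [b["name"]
            for b in info["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

def cmp_key(info):
    """jq '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.
    jq's join renders null as an empty string, which is why the log's
    [[ ... == \\5\\1\\2\\ \\ \\  ]] pattern is '512' plus three spaces."""
    fields = (info["block_size"], info["md_size"],
              info["md_interleave"], info["dif_type"])
    return " ".join("" if v is None else str(v) for v in fields)

print(configured_names(raid_info))  # ['BaseBdev1', 'BaseBdev2']
print(repr(cmp_key(raid_info)))     # '512   '
```

Comparing the raid bdev's key against each base bdev's key, as the `for name in $base_bdev_names` loop does, verifies that block size and metadata layout agree across all members.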
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.376 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.376 [2024-12-13 04:27:00.369611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:00.635 04:27:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.635 "name": "Existed_Raid", 00:11:00.635 "uuid": "361e935d-901a-404b-960d-95f7b6c60218", 00:11:00.635 "strip_size_kb": 0, 00:11:00.635 
"state": "online", 00:11:00.635 "raid_level": "raid1", 00:11:00.635 "superblock": true, 00:11:00.635 "num_base_bdevs": 4, 00:11:00.635 "num_base_bdevs_discovered": 3, 00:11:00.635 "num_base_bdevs_operational": 3, 00:11:00.635 "base_bdevs_list": [ 00:11:00.635 { 00:11:00.635 "name": null, 00:11:00.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.635 "is_configured": false, 00:11:00.635 "data_offset": 0, 00:11:00.635 "data_size": 63488 00:11:00.635 }, 00:11:00.635 { 00:11:00.635 "name": "BaseBdev2", 00:11:00.635 "uuid": "7f48b409-433f-4ca9-b8d4-d070256b5eb4", 00:11:00.635 "is_configured": true, 00:11:00.635 "data_offset": 2048, 00:11:00.635 "data_size": 63488 00:11:00.635 }, 00:11:00.635 { 00:11:00.635 "name": "BaseBdev3", 00:11:00.635 "uuid": "7dbdbdd3-5fb4-46de-9c5d-42028937a59d", 00:11:00.635 "is_configured": true, 00:11:00.635 "data_offset": 2048, 00:11:00.635 "data_size": 63488 00:11:00.635 }, 00:11:00.635 { 00:11:00.635 "name": "BaseBdev4", 00:11:00.635 "uuid": "49b200e8-0045-48da-b130-03f785c3dbfa", 00:11:00.635 "is_configured": true, 00:11:00.635 "data_offset": 2048, 00:11:00.635 "data_size": 63488 00:11:00.635 } 00:11:00.635 ] 00:11:00.635 }' 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.635 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.895 04:27:00 
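After `bdev_malloc_delete BaseBdev1`, the dump above keeps a placeholder slot (`"name": null`, zero UUID, `is_configured: false`) while `num_base_bdevs_discovered` drops to 3 and the array stays online. A rough Python sketch of the checks `verify_raid_bdev_state Existed_Raid online raid1 0 3` performs (this condenses the shell function's logic and uses a hand-trimmed copy of the dump, so it is not the actual `bdev_raid.sh` implementation):

```python
import json

# Condensed from the post-deletion dump: BaseBdev1's slot remains as an
# unconfigured placeholder with a null name.
raid_info = json.loads("""{
  "state": "online", "raid_level": "raid1", "num_base_bdevs": 4,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}""")

def verify_state(info, expected_state, expected_level, expected_discovered):
    """Sketch of verify_raid_bdev_state's core assertions: the slot count
    stays at num_base_bdevs; discovered == configured slots."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == expected_level
            and len(info["base_bdevs_list"]) == info["num_base_bdevs"]
            and discovered == expected_discovered)

print(verify_state(raid_info, "online", "raid1", 3))  # True
```

Because raid1 has redundancy (`has_redundancy raid1` returns 0 in the log), losing one of four base bdevs leaves the expected state `online` rather than `offline`.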
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.895 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.895 [2024-12-13 04:27:00.893322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 [2024-12-13 04:27:00.965793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 [2024-12-13 04:27:01.046334] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:01.155 [2024-12-13 04:27:01.046558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.155 [2024-12-13 04:27:01.066991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.155 [2024-12-13 04:27:01.067045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.155 [2024-12-13 04:27:01.067059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb 
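Once the last base bdev is deleted, the test confirms teardown with `jq -r '.[0]["name"] | select(.)'`: on an empty `bdev_raid_get_bdevs all` result the filter prints nothing, so `raid_bdev=''` and the `'[' -n '' ']'` guard falls through. A small Python equivalent of that emptiness check (the helper name is made up for the sketch):

```python
import json

def first_raid_name(dump):
    """jq -r '.[0]["name"] | select(.)': empty string when no raid bdev
    remains, which is how the test detects that Existed_Raid is gone.
    select(.) drops null, so a null name also yields ''."""
    bdevs = json.loads(dump)
    return bdevs[0]["name"] if bdevs and bdevs[0].get("name") else ""

print(repr(first_raid_name("[]")))                          # ''
print(repr(first_raid_name('[{"name": "Existed_Raid"}]')))  # 'Existed_Raid'
```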
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 BaseBdev2 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.155 04:27:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:01.155 [ 00:11:01.155 { 00:11:01.155 "name": "BaseBdev2", 00:11:01.155 "aliases": [ 00:11:01.155 "04bb9be0-3fc8-4885-ab3e-801520d8e9f4" 00:11:01.155 ], 00:11:01.155 "product_name": "Malloc disk", 00:11:01.155 "block_size": 512, 00:11:01.155 "num_blocks": 65536, 00:11:01.155 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:01.155 "assigned_rate_limits": { 00:11:01.155 "rw_ios_per_sec": 0, 00:11:01.155 "rw_mbytes_per_sec": 0, 00:11:01.155 "r_mbytes_per_sec": 0, 00:11:01.155 "w_mbytes_per_sec": 0 00:11:01.155 }, 00:11:01.155 "claimed": false, 00:11:01.155 "zoned": false, 00:11:01.155 "supported_io_types": { 00:11:01.155 "read": true, 00:11:01.155 "write": true, 00:11:01.155 "unmap": true, 00:11:01.155 "flush": true, 00:11:01.155 "reset": true, 00:11:01.155 "nvme_admin": false, 00:11:01.155 "nvme_io": false, 00:11:01.155 "nvme_io_md": false, 00:11:01.155 "write_zeroes": true, 00:11:01.155 "zcopy": true, 00:11:01.155 "get_zone_info": false, 00:11:01.155 "zone_management": false, 00:11:01.155 "zone_append": false, 00:11:01.155 "compare": false, 00:11:01.155 "compare_and_write": false, 00:11:01.155 "abort": true, 00:11:01.155 "seek_hole": false, 00:11:01.155 "seek_data": false, 00:11:01.155 "copy": true, 00:11:01.155 "nvme_iov_md": false 00:11:01.155 }, 00:11:01.155 "memory_domains": [ 00:11:01.155 { 00:11:01.155 "dma_device_id": "system", 00:11:01.155 "dma_device_type": 1 00:11:01.415 }, 00:11:01.415 { 00:11:01.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.415 "dma_device_type": 2 00:11:01.415 } 00:11:01.415 ], 00:11:01.415 "driver_specific": {} 00:11:01.415 } 00:11:01.415 ] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.415 04:27:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 BaseBdev3 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.415 04:27:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 [ 00:11:01.415 { 00:11:01.415 "name": "BaseBdev3", 00:11:01.415 "aliases": [ 00:11:01.415 "959fed49-756e-4938-941c-1a5ac6bb2a34" 00:11:01.415 ], 00:11:01.415 "product_name": "Malloc disk", 00:11:01.415 "block_size": 512, 00:11:01.415 "num_blocks": 65536, 00:11:01.415 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:01.415 "assigned_rate_limits": { 00:11:01.415 "rw_ios_per_sec": 0, 00:11:01.415 "rw_mbytes_per_sec": 0, 00:11:01.415 "r_mbytes_per_sec": 0, 00:11:01.415 "w_mbytes_per_sec": 0 00:11:01.415 }, 00:11:01.415 "claimed": false, 00:11:01.415 "zoned": false, 00:11:01.415 "supported_io_types": { 00:11:01.415 "read": true, 00:11:01.415 "write": true, 00:11:01.415 "unmap": true, 00:11:01.415 "flush": true, 00:11:01.415 "reset": true, 00:11:01.415 "nvme_admin": false, 00:11:01.415 "nvme_io": false, 00:11:01.415 "nvme_io_md": false, 00:11:01.415 "write_zeroes": true, 00:11:01.415 "zcopy": true, 00:11:01.415 "get_zone_info": false, 00:11:01.415 "zone_management": false, 00:11:01.415 "zone_append": false, 00:11:01.415 "compare": false, 00:11:01.415 "compare_and_write": false, 00:11:01.415 "abort": true, 00:11:01.415 "seek_hole": false, 00:11:01.415 "seek_data": false, 00:11:01.415 "copy": true, 00:11:01.415 "nvme_iov_md": false 00:11:01.415 }, 00:11:01.415 "memory_domains": [ 00:11:01.415 { 00:11:01.415 "dma_device_id": "system", 00:11:01.415 "dma_device_type": 1 00:11:01.415 }, 00:11:01.415 { 00:11:01.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.415 "dma_device_type": 2 00:11:01.415 } 00:11:01.415 ], 00:11:01.415 "driver_specific": {} 00:11:01.415 } 00:11:01.415 ] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 BaseBdev4 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.415 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 [ 00:11:01.415 { 00:11:01.415 "name": "BaseBdev4", 00:11:01.415 "aliases": [ 00:11:01.415 "fd963a25-eaaf-46ab-8e37-fd4909f8acf7" 00:11:01.415 ], 00:11:01.415 "product_name": "Malloc disk", 00:11:01.415 "block_size": 512, 00:11:01.415 "num_blocks": 65536, 00:11:01.415 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:01.415 "assigned_rate_limits": { 00:11:01.415 "rw_ios_per_sec": 0, 00:11:01.415 "rw_mbytes_per_sec": 0, 00:11:01.415 "r_mbytes_per_sec": 0, 00:11:01.415 "w_mbytes_per_sec": 0 00:11:01.415 }, 00:11:01.415 "claimed": false, 00:11:01.415 "zoned": false, 00:11:01.415 "supported_io_types": { 00:11:01.415 "read": true, 00:11:01.415 "write": true, 00:11:01.415 "unmap": true, 00:11:01.415 "flush": true, 00:11:01.415 "reset": true, 00:11:01.415 "nvme_admin": false, 00:11:01.415 "nvme_io": false, 00:11:01.416 "nvme_io_md": false, 00:11:01.416 "write_zeroes": true, 00:11:01.416 "zcopy": true, 00:11:01.416 "get_zone_info": false, 00:11:01.416 "zone_management": false, 00:11:01.416 "zone_append": false, 00:11:01.416 "compare": false, 00:11:01.416 "compare_and_write": false, 00:11:01.416 "abort": true, 00:11:01.416 "seek_hole": false, 00:11:01.416 "seek_data": false, 00:11:01.416 "copy": true, 00:11:01.416 "nvme_iov_md": false 00:11:01.416 }, 00:11:01.416 "memory_domains": [ 00:11:01.416 { 00:11:01.416 "dma_device_id": "system", 00:11:01.416 "dma_device_type": 1 00:11:01.416 }, 00:11:01.416 { 00:11:01.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.416 "dma_device_type": 2 00:11:01.416 } 00:11:01.416 ], 00:11:01.416 "driver_specific": {} 00:11:01.416 } 00:11:01.416 ] 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
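Each `bdev_malloc_create` above is followed by `waitforbdev`, which issues `bdev_wait_for_examine` and then `bdev_get_bdevs -b NAME -t 2000` to block until the bdev is visible. The generic pattern can be sketched as a timeout poll; everything here (`wait_for_bdev`, the `registry` stub) is a hypothetical stand-in, not an SPDK API:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_ms=2000, poll_ms=50):
    """Poll a caller-supplied lookup until the named bdev appears or the
    timeout expires; mirrors the waitforbdev idea, not its implementation."""
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        if get_bdev(name) is not None:
            return True
        time.sleep(poll_ms / 1000)
    return get_bdev(name) is not None  # one last look after the deadline

# Usage with a dict standing in for the RPC target's bdev table:
registry = {"BaseBdev4": {"block_size": 512, "num_blocks": 65536}}
print(wait_for_bdev(registry.get, "BaseBdev4"))  # True
```

In the real test the `-t 2000` timeout is what keeps a slow examine step from turning into an indefinite hang.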
00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.416 [2024-12-13 04:27:01.292926] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.416 [2024-12-13 04:27:01.293048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.416 [2024-12-13 04:27:01.293088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.416 [2024-12-13 04:27:01.295212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.416 [2024-12-13 04:27:01.295307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.416 "name": "Existed_Raid", 00:11:01.416 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:01.416 "strip_size_kb": 0, 00:11:01.416 "state": "configuring", 00:11:01.416 "raid_level": "raid1", 00:11:01.416 "superblock": true, 00:11:01.416 "num_base_bdevs": 4, 00:11:01.416 "num_base_bdevs_discovered": 3, 00:11:01.416 "num_base_bdevs_operational": 4, 00:11:01.416 "base_bdevs_list": [ 00:11:01.416 { 00:11:01.416 "name": "BaseBdev1", 00:11:01.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.416 "is_configured": false, 00:11:01.416 "data_offset": 0, 00:11:01.416 "data_size": 0 00:11:01.416 }, 00:11:01.416 { 00:11:01.416 "name": "BaseBdev2", 00:11:01.416 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 
00:11:01.416 "is_configured": true, 00:11:01.416 "data_offset": 2048, 00:11:01.416 "data_size": 63488 00:11:01.416 }, 00:11:01.416 { 00:11:01.416 "name": "BaseBdev3", 00:11:01.416 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:01.416 "is_configured": true, 00:11:01.416 "data_offset": 2048, 00:11:01.416 "data_size": 63488 00:11:01.416 }, 00:11:01.416 { 00:11:01.416 "name": "BaseBdev4", 00:11:01.416 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:01.416 "is_configured": true, 00:11:01.416 "data_offset": 2048, 00:11:01.416 "data_size": 63488 00:11:01.416 } 00:11:01.416 ] 00:11:01.416 }' 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.416 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.984 [2024-12-13 04:27:01.784166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.984 "name": "Existed_Raid", 00:11:01.984 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:01.984 "strip_size_kb": 0, 00:11:01.984 "state": "configuring", 00:11:01.984 "raid_level": "raid1", 00:11:01.984 "superblock": true, 00:11:01.984 "num_base_bdevs": 4, 00:11:01.984 "num_base_bdevs_discovered": 2, 00:11:01.984 "num_base_bdevs_operational": 4, 00:11:01.984 "base_bdevs_list": [ 00:11:01.984 { 00:11:01.984 "name": "BaseBdev1", 00:11:01.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.984 "is_configured": false, 00:11:01.984 "data_offset": 0, 00:11:01.984 "data_size": 0 00:11:01.984 }, 00:11:01.984 { 00:11:01.984 "name": null, 00:11:01.984 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:01.984 
"is_configured": false, 00:11:01.984 "data_offset": 0, 00:11:01.984 "data_size": 63488 00:11:01.984 }, 00:11:01.984 { 00:11:01.984 "name": "BaseBdev3", 00:11:01.984 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:01.984 "is_configured": true, 00:11:01.984 "data_offset": 2048, 00:11:01.984 "data_size": 63488 00:11:01.984 }, 00:11:01.984 { 00:11:01.984 "name": "BaseBdev4", 00:11:01.984 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:01.984 "is_configured": true, 00:11:01.984 "data_offset": 2048, 00:11:01.984 "data_size": 63488 00:11:01.984 } 00:11:01.984 ] 00:11:01.984 }' 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.984 04:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.243 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.503 [2024-12-13 04:27:02.264228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.503 BaseBdev1 
00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.503 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.503 [ 00:11:02.503 { 00:11:02.503 "name": "BaseBdev1", 00:11:02.503 "aliases": [ 00:11:02.503 "260d5a6f-7052-4267-82f8-5aaffb8bba03" 00:11:02.503 ], 00:11:02.503 "product_name": "Malloc disk", 00:11:02.504 "block_size": 512, 00:11:02.504 "num_blocks": 65536, 00:11:02.504 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:02.504 "assigned_rate_limits": { 00:11:02.504 
"rw_ios_per_sec": 0, 00:11:02.504 "rw_mbytes_per_sec": 0, 00:11:02.504 "r_mbytes_per_sec": 0, 00:11:02.504 "w_mbytes_per_sec": 0 00:11:02.504 }, 00:11:02.504 "claimed": true, 00:11:02.504 "claim_type": "exclusive_write", 00:11:02.504 "zoned": false, 00:11:02.504 "supported_io_types": { 00:11:02.504 "read": true, 00:11:02.504 "write": true, 00:11:02.504 "unmap": true, 00:11:02.504 "flush": true, 00:11:02.504 "reset": true, 00:11:02.504 "nvme_admin": false, 00:11:02.504 "nvme_io": false, 00:11:02.504 "nvme_io_md": false, 00:11:02.504 "write_zeroes": true, 00:11:02.504 "zcopy": true, 00:11:02.504 "get_zone_info": false, 00:11:02.504 "zone_management": false, 00:11:02.504 "zone_append": false, 00:11:02.504 "compare": false, 00:11:02.504 "compare_and_write": false, 00:11:02.504 "abort": true, 00:11:02.504 "seek_hole": false, 00:11:02.504 "seek_data": false, 00:11:02.504 "copy": true, 00:11:02.504 "nvme_iov_md": false 00:11:02.504 }, 00:11:02.504 "memory_domains": [ 00:11:02.504 { 00:11:02.504 "dma_device_id": "system", 00:11:02.504 "dma_device_type": 1 00:11:02.504 }, 00:11:02.504 { 00:11:02.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.504 "dma_device_type": 2 00:11:02.504 } 00:11:02.504 ], 00:11:02.504 "driver_specific": {} 00:11:02.504 } 00:11:02.504 ] 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.504 "name": "Existed_Raid", 00:11:02.504 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:02.504 "strip_size_kb": 0, 00:11:02.504 "state": "configuring", 00:11:02.504 "raid_level": "raid1", 00:11:02.504 "superblock": true, 00:11:02.504 "num_base_bdevs": 4, 00:11:02.504 "num_base_bdevs_discovered": 3, 00:11:02.504 "num_base_bdevs_operational": 4, 00:11:02.504 "base_bdevs_list": [ 00:11:02.504 { 00:11:02.504 "name": "BaseBdev1", 00:11:02.504 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:02.504 "is_configured": true, 00:11:02.504 "data_offset": 2048, 00:11:02.504 "data_size": 63488 
00:11:02.504 }, 00:11:02.504 { 00:11:02.504 "name": null, 00:11:02.504 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:02.504 "is_configured": false, 00:11:02.504 "data_offset": 0, 00:11:02.504 "data_size": 63488 00:11:02.504 }, 00:11:02.504 { 00:11:02.504 "name": "BaseBdev3", 00:11:02.504 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:02.504 "is_configured": true, 00:11:02.504 "data_offset": 2048, 00:11:02.504 "data_size": 63488 00:11:02.504 }, 00:11:02.504 { 00:11:02.504 "name": "BaseBdev4", 00:11:02.504 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:02.504 "is_configured": true, 00:11:02.504 "data_offset": 2048, 00:11:02.504 "data_size": 63488 00:11:02.504 } 00:11:02.504 ] 00:11:02.504 }' 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.504 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.764 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.764 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.764 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.764 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.764 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.023 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:03.023 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:03.023 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.023 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.023 
[2024-12-13 04:27:02.807362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.023 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.024 04:27:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.024 "name": "Existed_Raid", 00:11:03.024 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:03.024 "strip_size_kb": 0, 00:11:03.024 "state": "configuring", 00:11:03.024 "raid_level": "raid1", 00:11:03.024 "superblock": true, 00:11:03.024 "num_base_bdevs": 4, 00:11:03.024 "num_base_bdevs_discovered": 2, 00:11:03.024 "num_base_bdevs_operational": 4, 00:11:03.024 "base_bdevs_list": [ 00:11:03.024 { 00:11:03.024 "name": "BaseBdev1", 00:11:03.024 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:03.024 "is_configured": true, 00:11:03.024 "data_offset": 2048, 00:11:03.024 "data_size": 63488 00:11:03.024 }, 00:11:03.024 { 00:11:03.024 "name": null, 00:11:03.024 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:03.024 "is_configured": false, 00:11:03.024 "data_offset": 0, 00:11:03.024 "data_size": 63488 00:11:03.024 }, 00:11:03.024 { 00:11:03.024 "name": null, 00:11:03.024 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:03.024 "is_configured": false, 00:11:03.024 "data_offset": 0, 00:11:03.024 "data_size": 63488 00:11:03.024 }, 00:11:03.024 { 00:11:03.024 "name": "BaseBdev4", 00:11:03.024 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:03.024 "is_configured": true, 00:11:03.024 "data_offset": 2048, 00:11:03.024 "data_size": 63488 00:11:03.024 } 00:11:03.024 ] 00:11:03.024 }' 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.024 04:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 04:27:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 [2024-12-13 04:27:03.282573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.283 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.543 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.543 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.543 "name": "Existed_Raid", 00:11:03.543 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:03.543 "strip_size_kb": 0, 00:11:03.543 "state": "configuring", 00:11:03.543 "raid_level": "raid1", 00:11:03.543 "superblock": true, 00:11:03.543 "num_base_bdevs": 4, 00:11:03.543 "num_base_bdevs_discovered": 3, 00:11:03.543 "num_base_bdevs_operational": 4, 00:11:03.543 "base_bdevs_list": [ 00:11:03.543 { 00:11:03.543 "name": "BaseBdev1", 00:11:03.543 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:03.543 "is_configured": true, 00:11:03.543 "data_offset": 2048, 00:11:03.543 "data_size": 63488 00:11:03.543 }, 00:11:03.543 { 00:11:03.543 "name": null, 00:11:03.543 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:03.543 "is_configured": false, 00:11:03.543 "data_offset": 0, 00:11:03.543 "data_size": 63488 00:11:03.543 }, 00:11:03.543 { 00:11:03.543 "name": "BaseBdev3", 00:11:03.543 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:03.543 "is_configured": true, 00:11:03.543 "data_offset": 2048, 00:11:03.543 "data_size": 63488 00:11:03.543 }, 00:11:03.543 { 00:11:03.543 "name": "BaseBdev4", 00:11:03.543 "uuid": 
"fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:03.543 "is_configured": true, 00:11:03.543 "data_offset": 2048, 00:11:03.543 "data_size": 63488 00:11:03.543 } 00:11:03.543 ] 00:11:03.543 }' 00:11:03.543 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.543 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.803 [2024-12-13 04:27:03.757807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.803 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.062 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.062 "name": "Existed_Raid", 00:11:04.062 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:04.062 "strip_size_kb": 0, 00:11:04.062 "state": "configuring", 00:11:04.062 "raid_level": "raid1", 00:11:04.062 "superblock": true, 00:11:04.062 "num_base_bdevs": 4, 00:11:04.063 "num_base_bdevs_discovered": 2, 00:11:04.063 "num_base_bdevs_operational": 4, 00:11:04.063 "base_bdevs_list": [ 00:11:04.063 { 00:11:04.063 "name": null, 00:11:04.063 
"uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:04.063 "is_configured": false, 00:11:04.063 "data_offset": 0, 00:11:04.063 "data_size": 63488 00:11:04.063 }, 00:11:04.063 { 00:11:04.063 "name": null, 00:11:04.063 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:04.063 "is_configured": false, 00:11:04.063 "data_offset": 0, 00:11:04.063 "data_size": 63488 00:11:04.063 }, 00:11:04.063 { 00:11:04.063 "name": "BaseBdev3", 00:11:04.063 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:04.063 "is_configured": true, 00:11:04.063 "data_offset": 2048, 00:11:04.063 "data_size": 63488 00:11:04.063 }, 00:11:04.063 { 00:11:04.063 "name": "BaseBdev4", 00:11:04.063 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:04.063 "is_configured": true, 00:11:04.063 "data_offset": 2048, 00:11:04.063 "data_size": 63488 00:11:04.063 } 00:11:04.063 ] 00:11:04.063 }' 00:11:04.063 04:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.063 04:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.322 [2024-12-13 04:27:04.249024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.322 04:27:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.322 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.322 "name": "Existed_Raid", 00:11:04.322 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:04.322 "strip_size_kb": 0, 00:11:04.322 "state": "configuring", 00:11:04.322 "raid_level": "raid1", 00:11:04.322 "superblock": true, 00:11:04.322 "num_base_bdevs": 4, 00:11:04.322 "num_base_bdevs_discovered": 3, 00:11:04.322 "num_base_bdevs_operational": 4, 00:11:04.322 "base_bdevs_list": [ 00:11:04.322 { 00:11:04.322 "name": null, 00:11:04.322 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:04.322 "is_configured": false, 00:11:04.322 "data_offset": 0, 00:11:04.322 "data_size": 63488 00:11:04.322 }, 00:11:04.322 { 00:11:04.322 "name": "BaseBdev2", 00:11:04.322 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:04.322 "is_configured": true, 00:11:04.322 "data_offset": 2048, 00:11:04.322 "data_size": 63488 00:11:04.322 }, 00:11:04.322 { 00:11:04.322 "name": "BaseBdev3", 00:11:04.322 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:04.322 "is_configured": true, 00:11:04.322 "data_offset": 2048, 00:11:04.322 "data_size": 63488 00:11:04.322 }, 00:11:04.322 { 00:11:04.322 "name": "BaseBdev4", 00:11:04.322 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:04.322 "is_configured": true, 00:11:04.322 "data_offset": 2048, 00:11:04.322 "data_size": 63488 00:11:04.323 } 00:11:04.323 ] 00:11:04.323 }' 00:11:04.323 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.323 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:04.892 04:27:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 260d5a6f-7052-4267-82f8-5aaffb8bba03 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 [2024-12-13 04:27:04.792854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:04.892 [2024-12-13 04:27:04.793149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:04.892 [2024-12-13 04:27:04.793204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.892 [2024-12-13 04:27:04.793530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:11:04.892 NewBaseBdev 00:11:04.892 [2024-12-13 04:27:04.793727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:04.892 [2024-12-13 04:27:04.793739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:04.892 [2024-12-13 04:27:04.793860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.892 04:27:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 [ 00:11:04.892 { 00:11:04.892 "name": "NewBaseBdev", 00:11:04.892 "aliases": [ 00:11:04.892 "260d5a6f-7052-4267-82f8-5aaffb8bba03" 00:11:04.892 ], 00:11:04.892 "product_name": "Malloc disk", 00:11:04.892 "block_size": 512, 00:11:04.892 "num_blocks": 65536, 00:11:04.892 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:04.892 "assigned_rate_limits": { 00:11:04.892 "rw_ios_per_sec": 0, 00:11:04.892 "rw_mbytes_per_sec": 0, 00:11:04.892 "r_mbytes_per_sec": 0, 00:11:04.892 "w_mbytes_per_sec": 0 00:11:04.892 }, 00:11:04.892 "claimed": true, 00:11:04.892 "claim_type": "exclusive_write", 00:11:04.892 "zoned": false, 00:11:04.892 "supported_io_types": { 00:11:04.892 "read": true, 00:11:04.892 "write": true, 00:11:04.892 "unmap": true, 00:11:04.892 "flush": true, 00:11:04.892 "reset": true, 00:11:04.892 "nvme_admin": false, 00:11:04.892 "nvme_io": false, 00:11:04.892 "nvme_io_md": false, 00:11:04.892 "write_zeroes": true, 00:11:04.892 "zcopy": true, 00:11:04.892 "get_zone_info": false, 00:11:04.892 "zone_management": false, 00:11:04.892 "zone_append": false, 00:11:04.892 "compare": false, 00:11:04.892 "compare_and_write": false, 00:11:04.892 "abort": true, 00:11:04.892 "seek_hole": false, 00:11:04.892 "seek_data": false, 00:11:04.892 "copy": true, 00:11:04.892 "nvme_iov_md": false 00:11:04.892 }, 00:11:04.892 "memory_domains": [ 00:11:04.892 { 00:11:04.892 "dma_device_id": "system", 00:11:04.892 "dma_device_type": 1 00:11:04.892 }, 00:11:04.892 { 00:11:04.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.892 "dma_device_type": 2 00:11:04.892 } 00:11:04.892 ], 00:11:04.892 "driver_specific": {} 00:11:04.892 } 00:11:04.892 ] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.892 04:27:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.892 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.892 "name": "Existed_Raid", 00:11:04.892 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:04.892 "strip_size_kb": 0, 00:11:04.892 
"state": "online", 00:11:04.892 "raid_level": "raid1", 00:11:04.892 "superblock": true, 00:11:04.892 "num_base_bdevs": 4, 00:11:04.892 "num_base_bdevs_discovered": 4, 00:11:04.892 "num_base_bdevs_operational": 4, 00:11:04.892 "base_bdevs_list": [ 00:11:04.892 { 00:11:04.892 "name": "NewBaseBdev", 00:11:04.892 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:04.892 "is_configured": true, 00:11:04.892 "data_offset": 2048, 00:11:04.892 "data_size": 63488 00:11:04.892 }, 00:11:04.892 { 00:11:04.892 "name": "BaseBdev2", 00:11:04.892 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:04.892 "is_configured": true, 00:11:04.892 "data_offset": 2048, 00:11:04.892 "data_size": 63488 00:11:04.892 }, 00:11:04.893 { 00:11:04.893 "name": "BaseBdev3", 00:11:04.893 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:04.893 "is_configured": true, 00:11:04.893 "data_offset": 2048, 00:11:04.893 "data_size": 63488 00:11:04.893 }, 00:11:04.893 { 00:11:04.893 "name": "BaseBdev4", 00:11:04.893 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:04.893 "is_configured": true, 00:11:04.893 "data_offset": 2048, 00:11:04.893 "data_size": 63488 00:11:04.893 } 00:11:04.893 ] 00:11:04.893 }' 00:11:04.893 04:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.893 04:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.461 
04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.461 [2024-12-13 04:27:05.236455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.461 "name": "Existed_Raid", 00:11:05.461 "aliases": [ 00:11:05.461 "6500c547-efda-4a61-b88f-7a41ab0a1bcb" 00:11:05.461 ], 00:11:05.461 "product_name": "Raid Volume", 00:11:05.461 "block_size": 512, 00:11:05.461 "num_blocks": 63488, 00:11:05.461 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:05.461 "assigned_rate_limits": { 00:11:05.461 "rw_ios_per_sec": 0, 00:11:05.461 "rw_mbytes_per_sec": 0, 00:11:05.461 "r_mbytes_per_sec": 0, 00:11:05.461 "w_mbytes_per_sec": 0 00:11:05.461 }, 00:11:05.461 "claimed": false, 00:11:05.461 "zoned": false, 00:11:05.461 "supported_io_types": { 00:11:05.461 "read": true, 00:11:05.461 "write": true, 00:11:05.461 "unmap": false, 00:11:05.461 "flush": false, 00:11:05.461 "reset": true, 00:11:05.461 "nvme_admin": false, 00:11:05.461 "nvme_io": false, 00:11:05.461 "nvme_io_md": false, 00:11:05.461 "write_zeroes": true, 00:11:05.461 "zcopy": false, 00:11:05.461 "get_zone_info": false, 00:11:05.461 "zone_management": false, 00:11:05.461 "zone_append": false, 00:11:05.461 "compare": false, 00:11:05.461 "compare_and_write": false, 00:11:05.461 
"abort": false, 00:11:05.461 "seek_hole": false, 00:11:05.461 "seek_data": false, 00:11:05.461 "copy": false, 00:11:05.461 "nvme_iov_md": false 00:11:05.461 }, 00:11:05.461 "memory_domains": [ 00:11:05.461 { 00:11:05.461 "dma_device_id": "system", 00:11:05.461 "dma_device_type": 1 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.461 "dma_device_type": 2 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "system", 00:11:05.461 "dma_device_type": 1 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.461 "dma_device_type": 2 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "system", 00:11:05.461 "dma_device_type": 1 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.461 "dma_device_type": 2 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "system", 00:11:05.461 "dma_device_type": 1 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.461 "dma_device_type": 2 00:11:05.461 } 00:11:05.461 ], 00:11:05.461 "driver_specific": { 00:11:05.461 "raid": { 00:11:05.461 "uuid": "6500c547-efda-4a61-b88f-7a41ab0a1bcb", 00:11:05.461 "strip_size_kb": 0, 00:11:05.461 "state": "online", 00:11:05.461 "raid_level": "raid1", 00:11:05.461 "superblock": true, 00:11:05.461 "num_base_bdevs": 4, 00:11:05.461 "num_base_bdevs_discovered": 4, 00:11:05.461 "num_base_bdevs_operational": 4, 00:11:05.461 "base_bdevs_list": [ 00:11:05.461 { 00:11:05.461 "name": "NewBaseBdev", 00:11:05.461 "uuid": "260d5a6f-7052-4267-82f8-5aaffb8bba03", 00:11:05.461 "is_configured": true, 00:11:05.461 "data_offset": 2048, 00:11:05.461 "data_size": 63488 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "name": "BaseBdev2", 00:11:05.461 "uuid": "04bb9be0-3fc8-4885-ab3e-801520d8e9f4", 00:11:05.461 "is_configured": true, 00:11:05.461 "data_offset": 2048, 00:11:05.461 "data_size": 63488 00:11:05.461 }, 00:11:05.461 { 
00:11:05.461 "name": "BaseBdev3", 00:11:05.461 "uuid": "959fed49-756e-4938-941c-1a5ac6bb2a34", 00:11:05.461 "is_configured": true, 00:11:05.461 "data_offset": 2048, 00:11:05.461 "data_size": 63488 00:11:05.461 }, 00:11:05.461 { 00:11:05.461 "name": "BaseBdev4", 00:11:05.461 "uuid": "fd963a25-eaaf-46ab-8e37-fd4909f8acf7", 00:11:05.461 "is_configured": true, 00:11:05.461 "data_offset": 2048, 00:11:05.461 "data_size": 63488 00:11:05.461 } 00:11:05.461 ] 00:11:05.461 } 00:11:05.461 } 00:11:05.461 }' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:05.461 BaseBdev2 00:11:05.461 BaseBdev3 00:11:05.461 BaseBdev4' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.461 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.462 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.721 [2024-12-13 04:27:05.547597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.721 [2024-12-13 04:27:05.547671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.721 [2024-12-13 04:27:05.547794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.721 [2024-12-13 04:27:05.548101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.721 [2024-12-13 04:27:05.548158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86324 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86324 ']' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 86324 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86324 00:11:05.721 killing process with pid 86324 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86324' 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 86324 00:11:05.721 [2024-12-13 04:27:05.595933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.721 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 86324 00:11:05.721 [2024-12-13 04:27:05.671594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.289 04:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:06.289 00:11:06.289 real 0m9.762s 00:11:06.289 user 0m16.358s 00:11:06.289 sys 0m2.192s 00:11:06.289 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:06.289 ************************************ 00:11:06.289 END TEST raid_state_function_test_sb 00:11:06.289 ************************************ 00:11:06.289 04:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.289 04:27:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:06.289 04:27:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.289 04:27:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.289 04:27:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.289 ************************************ 00:11:06.289 START TEST raid_superblock_test 00:11:06.289 ************************************ 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:06.289 04:27:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86978 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86978 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 86978 ']' 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.289 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.289 [2024-12-13 04:27:06.165785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:11:06.289 [2024-12-13 04:27:06.166025] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86978 ] 00:11:06.548 [2024-12-13 04:27:06.322368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.548 [2024-12-13 04:27:06.362105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.548 [2024-12-13 04:27:06.438466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.548 [2024-12-13 04:27:06.438608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:07.117 
04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.117 04:27:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.117 malloc1 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.117 [2024-12-13 04:27:07.015747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.117 [2024-12-13 04:27:07.015830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.117 [2024-12-13 04:27:07.015857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:07.117 [2024-12-13 04:27:07.015879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.117 [2024-12-13 04:27:07.018293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.117 [2024-12-13 04:27:07.018400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.117 pt1 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.117 malloc2 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.117 [2024-12-13 04:27:07.050251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.117 [2024-12-13 04:27:07.050383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.117 [2024-12-13 04:27:07.050422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:07.117 [2024-12-13 04:27:07.050467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.117 [2024-12-13 04:27:07.052829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.117 [2024-12-13 04:27:07.052905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.117 
pt2 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:07.117 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.118 malloc3 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.118 [2024-12-13 04:27:07.088654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:07.118 [2024-12-13 04:27:07.088781] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.118 [2024-12-13 04:27:07.088822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:07.118 [2024-12-13 04:27:07.088855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.118 [2024-12-13 04:27:07.091257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.118 [2024-12-13 04:27:07.091328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:07.118 pt3 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.118 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 malloc4 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 [2024-12-13 04:27:07.145035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:07.378 [2024-12-13 04:27:07.145135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.378 [2024-12-13 04:27:07.145170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:07.378 [2024-12-13 04:27:07.145197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.378 [2024-12-13 04:27:07.148626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.378 [2024-12-13 04:27:07.148677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:07.378 pt4 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 [2024-12-13 04:27:07.156945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.378 [2024-12-13 04:27:07.159096] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.378 [2024-12-13 04:27:07.159169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.378 [2024-12-13 04:27:07.159244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:07.378 [2024-12-13 04:27:07.159413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:07.378 [2024-12-13 04:27:07.159427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.378 [2024-12-13 04:27:07.159709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:07.378 [2024-12-13 04:27:07.159955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:07.378 [2024-12-13 04:27:07.159970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:07.378 [2024-12-13 04:27:07.160099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.378 
04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.378 "name": "raid_bdev1", 00:11:07.378 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:07.378 "strip_size_kb": 0, 00:11:07.378 "state": "online", 00:11:07.378 "raid_level": "raid1", 00:11:07.378 "superblock": true, 00:11:07.378 "num_base_bdevs": 4, 00:11:07.378 "num_base_bdevs_discovered": 4, 00:11:07.378 "num_base_bdevs_operational": 4, 00:11:07.378 "base_bdevs_list": [ 00:11:07.378 { 00:11:07.378 "name": "pt1", 00:11:07.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.378 "is_configured": true, 00:11:07.378 "data_offset": 2048, 00:11:07.378 "data_size": 63488 00:11:07.378 }, 00:11:07.378 { 00:11:07.378 "name": "pt2", 00:11:07.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.378 "is_configured": true, 00:11:07.378 "data_offset": 2048, 00:11:07.378 "data_size": 63488 00:11:07.378 }, 00:11:07.378 { 00:11:07.378 "name": "pt3", 00:11:07.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.378 "is_configured": true, 00:11:07.378 "data_offset": 2048, 00:11:07.378 "data_size": 63488 
00:11:07.378 }, 00:11:07.378 { 00:11:07.378 "name": "pt4", 00:11:07.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.378 "is_configured": true, 00:11:07.378 "data_offset": 2048, 00:11:07.378 "data_size": 63488 00:11:07.378 } 00:11:07.378 ] 00:11:07.378 }' 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.378 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.638 [2024-12-13 04:27:07.588647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.638 "name": "raid_bdev1", 00:11:07.638 "aliases": [ 00:11:07.638 "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9" 00:11:07.638 ], 
00:11:07.638 "product_name": "Raid Volume", 00:11:07.638 "block_size": 512, 00:11:07.638 "num_blocks": 63488, 00:11:07.638 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:07.638 "assigned_rate_limits": { 00:11:07.638 "rw_ios_per_sec": 0, 00:11:07.638 "rw_mbytes_per_sec": 0, 00:11:07.638 "r_mbytes_per_sec": 0, 00:11:07.638 "w_mbytes_per_sec": 0 00:11:07.638 }, 00:11:07.638 "claimed": false, 00:11:07.638 "zoned": false, 00:11:07.638 "supported_io_types": { 00:11:07.638 "read": true, 00:11:07.638 "write": true, 00:11:07.638 "unmap": false, 00:11:07.638 "flush": false, 00:11:07.638 "reset": true, 00:11:07.638 "nvme_admin": false, 00:11:07.638 "nvme_io": false, 00:11:07.638 "nvme_io_md": false, 00:11:07.638 "write_zeroes": true, 00:11:07.638 "zcopy": false, 00:11:07.638 "get_zone_info": false, 00:11:07.638 "zone_management": false, 00:11:07.638 "zone_append": false, 00:11:07.638 "compare": false, 00:11:07.638 "compare_and_write": false, 00:11:07.638 "abort": false, 00:11:07.638 "seek_hole": false, 00:11:07.638 "seek_data": false, 00:11:07.638 "copy": false, 00:11:07.638 "nvme_iov_md": false 00:11:07.638 }, 00:11:07.638 "memory_domains": [ 00:11:07.638 { 00:11:07.638 "dma_device_id": "system", 00:11:07.638 "dma_device_type": 1 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.638 "dma_device_type": 2 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": "system", 00:11:07.638 "dma_device_type": 1 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.638 "dma_device_type": 2 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": "system", 00:11:07.638 "dma_device_type": 1 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.638 "dma_device_type": 2 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": "system", 00:11:07.638 "dma_device_type": 1 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:07.638 "dma_device_type": 2 00:11:07.638 } 00:11:07.638 ], 00:11:07.638 "driver_specific": { 00:11:07.638 "raid": { 00:11:07.638 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:07.638 "strip_size_kb": 0, 00:11:07.638 "state": "online", 00:11:07.638 "raid_level": "raid1", 00:11:07.638 "superblock": true, 00:11:07.638 "num_base_bdevs": 4, 00:11:07.638 "num_base_bdevs_discovered": 4, 00:11:07.638 "num_base_bdevs_operational": 4, 00:11:07.638 "base_bdevs_list": [ 00:11:07.638 { 00:11:07.638 "name": "pt1", 00:11:07.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.638 "is_configured": true, 00:11:07.638 "data_offset": 2048, 00:11:07.638 "data_size": 63488 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "name": "pt2", 00:11:07.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.638 "is_configured": true, 00:11:07.638 "data_offset": 2048, 00:11:07.638 "data_size": 63488 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "name": "pt3", 00:11:07.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.638 "is_configured": true, 00:11:07.638 "data_offset": 2048, 00:11:07.638 "data_size": 63488 00:11:07.638 }, 00:11:07.638 { 00:11:07.638 "name": "pt4", 00:11:07.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.638 "is_configured": true, 00:11:07.638 "data_offset": 2048, 00:11:07.638 "data_size": 63488 00:11:07.638 } 00:11:07.638 ] 00:11:07.638 } 00:11:07.638 } 00:11:07.638 }' 00:11:07.638 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:07.898 pt2 00:11:07.898 pt3 00:11:07.898 pt4' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.898 04:27:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:07.898 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 [2024-12-13 04:27:07.915899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82f2ed7f-7dfb-496e-81a3-41e8466a8bf9 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 82f2ed7f-7dfb-496e-81a3-41e8466a8bf9 ']' 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 [2024-12-13 04:27:07.963555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.158 [2024-12-13 04:27:07.963635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.158 [2024-12-13 04:27:07.963744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.158 [2024-12-13 04:27:07.963884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.158 [2024-12-13 04:27:07.963931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 04:27:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.158 04:27:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.158 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.158 [2024-12-13 04:27:08.111334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:08.158 [2024-12-13 04:27:08.113551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:08.158 [2024-12-13 04:27:08.113653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:08.158 [2024-12-13 04:27:08.113688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:08.158 [2024-12-13 04:27:08.113740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:08.158 [2024-12-13 04:27:08.113779] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:08.158 [2024-12-13 04:27:08.113798] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:08.158 [2024-12-13 04:27:08.113813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:08.158 [2024-12-13 04:27:08.113827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.158 [2024-12-13 04:27:08.113836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:11:08.158 request: 00:11:08.158 { 00:11:08.158 "name": "raid_bdev1", 00:11:08.158 "raid_level": "raid1", 00:11:08.158 "base_bdevs": [ 00:11:08.158 "malloc1", 00:11:08.158 "malloc2", 00:11:08.158 "malloc3", 00:11:08.158 "malloc4" 00:11:08.158 ], 00:11:08.158 "superblock": false, 00:11:08.158 "method": "bdev_raid_create", 00:11:08.158 "req_id": 1 00:11:08.158 } 00:11:08.158 Got JSON-RPC error response 00:11:08.158 response: 00:11:08.158 { 00:11:08.158 "code": -17, 00:11:08.159 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:08.159 } 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:08.159 
04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.159 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.419 [2024-12-13 04:27:08.175204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.419 [2024-12-13 04:27:08.175252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.419 [2024-12-13 04:27:08.175292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:08.419 [2024-12-13 04:27:08.175301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.419 [2024-12-13 04:27:08.177785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.419 [2024-12-13 04:27:08.177820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.419 [2024-12-13 04:27:08.177904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:08.419 [2024-12-13 04:27:08.177935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:08.419 pt1 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.419 04:27:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.419 "name": "raid_bdev1", 00:11:08.419 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:08.419 "strip_size_kb": 0, 00:11:08.419 "state": "configuring", 00:11:08.419 "raid_level": "raid1", 00:11:08.419 "superblock": true, 00:11:08.419 "num_base_bdevs": 4, 00:11:08.419 "num_base_bdevs_discovered": 1, 00:11:08.419 "num_base_bdevs_operational": 4, 00:11:08.419 "base_bdevs_list": [ 00:11:08.419 { 00:11:08.419 "name": "pt1", 00:11:08.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.419 "is_configured": true, 00:11:08.419 "data_offset": 2048, 00:11:08.419 "data_size": 63488 00:11:08.419 }, 00:11:08.419 { 00:11:08.419 "name": null, 00:11:08.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.419 "is_configured": false, 00:11:08.419 "data_offset": 2048, 00:11:08.419 "data_size": 63488 00:11:08.419 }, 00:11:08.419 { 00:11:08.419 "name": null, 00:11:08.419 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.419 
"is_configured": false, 00:11:08.419 "data_offset": 2048, 00:11:08.419 "data_size": 63488 00:11:08.419 }, 00:11:08.419 { 00:11:08.419 "name": null, 00:11:08.419 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.419 "is_configured": false, 00:11:08.419 "data_offset": 2048, 00:11:08.419 "data_size": 63488 00:11:08.419 } 00:11:08.419 ] 00:11:08.419 }' 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.419 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.679 [2024-12-13 04:27:08.590530] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.679 [2024-12-13 04:27:08.590636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.679 [2024-12-13 04:27:08.590678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:08.679 [2024-12-13 04:27:08.590704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.679 [2024-12-13 04:27:08.591144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.679 [2024-12-13 04:27:08.591200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.679 [2024-12-13 04:27:08.591291] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.679 [2024-12-13 04:27:08.591339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:08.679 pt2 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.679 [2024-12-13 04:27:08.602556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.679 04:27:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.679 "name": "raid_bdev1", 00:11:08.679 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:08.679 "strip_size_kb": 0, 00:11:08.679 "state": "configuring", 00:11:08.679 "raid_level": "raid1", 00:11:08.679 "superblock": true, 00:11:08.679 "num_base_bdevs": 4, 00:11:08.679 "num_base_bdevs_discovered": 1, 00:11:08.679 "num_base_bdevs_operational": 4, 00:11:08.679 "base_bdevs_list": [ 00:11:08.679 { 00:11:08.679 "name": "pt1", 00:11:08.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.679 "is_configured": true, 00:11:08.679 "data_offset": 2048, 00:11:08.679 "data_size": 63488 00:11:08.679 }, 00:11:08.679 { 00:11:08.679 "name": null, 00:11:08.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.679 "is_configured": false, 00:11:08.679 "data_offset": 0, 00:11:08.679 "data_size": 63488 00:11:08.679 }, 00:11:08.679 { 00:11:08.679 "name": null, 00:11:08.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.679 "is_configured": false, 00:11:08.679 "data_offset": 2048, 00:11:08.679 "data_size": 63488 00:11:08.679 }, 00:11:08.679 { 00:11:08.679 "name": null, 00:11:08.679 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.679 "is_configured": false, 00:11:08.679 "data_offset": 2048, 00:11:08.679 "data_size": 63488 00:11:08.679 } 00:11:08.679 ] 00:11:08.679 }' 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.679 04:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.248 [2024-12-13 04:27:09.053747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.248 [2024-12-13 04:27:09.053867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.248 [2024-12-13 04:27:09.053889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:09.248 [2024-12-13 04:27:09.053901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.248 [2024-12-13 04:27:09.054299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.248 [2024-12-13 04:27:09.054320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.248 [2024-12-13 04:27:09.054389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.248 [2024-12-13 04:27:09.054411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.248 pt2 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.248 04:27:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.248 [2024-12-13 04:27:09.065686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.248 [2024-12-13 04:27:09.065735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.248 [2024-12-13 04:27:09.065750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.248 [2024-12-13 04:27:09.065761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.248 [2024-12-13 04:27:09.066115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.248 [2024-12-13 04:27:09.066133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.248 [2024-12-13 04:27:09.066184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:09.248 [2024-12-13 04:27:09.066213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.248 pt3 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.248 [2024-12-13 04:27:09.077668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:09.248 [2024-12-13 
04:27:09.077718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.248 [2024-12-13 04:27:09.077731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:09.248 [2024-12-13 04:27:09.077741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.248 [2024-12-13 04:27:09.078038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.248 [2024-12-13 04:27:09.078058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:09.248 [2024-12-13 04:27:09.078107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:09.248 [2024-12-13 04:27:09.078126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:09.248 [2024-12-13 04:27:09.078236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:09.248 [2024-12-13 04:27:09.078253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.248 [2024-12-13 04:27:09.078510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:09.248 [2024-12-13 04:27:09.078665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:09.248 [2024-12-13 04:27:09.078675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:11:09.248 [2024-12-13 04:27:09.078779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.248 pt4 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.248 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.249 "name": "raid_bdev1", 00:11:09.249 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:09.249 "strip_size_kb": 0, 00:11:09.249 "state": "online", 00:11:09.249 "raid_level": "raid1", 00:11:09.249 "superblock": true, 00:11:09.249 "num_base_bdevs": 4, 00:11:09.249 
"num_base_bdevs_discovered": 4, 00:11:09.249 "num_base_bdevs_operational": 4, 00:11:09.249 "base_bdevs_list": [ 00:11:09.249 { 00:11:09.249 "name": "pt1", 00:11:09.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.249 "is_configured": true, 00:11:09.249 "data_offset": 2048, 00:11:09.249 "data_size": 63488 00:11:09.249 }, 00:11:09.249 { 00:11:09.249 "name": "pt2", 00:11:09.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.249 "is_configured": true, 00:11:09.249 "data_offset": 2048, 00:11:09.249 "data_size": 63488 00:11:09.249 }, 00:11:09.249 { 00:11:09.249 "name": "pt3", 00:11:09.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.249 "is_configured": true, 00:11:09.249 "data_offset": 2048, 00:11:09.249 "data_size": 63488 00:11:09.249 }, 00:11:09.249 { 00:11:09.249 "name": "pt4", 00:11:09.249 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.249 "is_configured": true, 00:11:09.249 "data_offset": 2048, 00:11:09.249 "data_size": 63488 00:11:09.249 } 00:11:09.249 ] 00:11:09.249 }' 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.249 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.508 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.508 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.768 [2024-12-13 04:27:09.537219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.768 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.768 "name": "raid_bdev1", 00:11:09.768 "aliases": [ 00:11:09.768 "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9" 00:11:09.768 ], 00:11:09.768 "product_name": "Raid Volume", 00:11:09.768 "block_size": 512, 00:11:09.768 "num_blocks": 63488, 00:11:09.768 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:09.768 "assigned_rate_limits": { 00:11:09.768 "rw_ios_per_sec": 0, 00:11:09.768 "rw_mbytes_per_sec": 0, 00:11:09.768 "r_mbytes_per_sec": 0, 00:11:09.768 "w_mbytes_per_sec": 0 00:11:09.768 }, 00:11:09.768 "claimed": false, 00:11:09.768 "zoned": false, 00:11:09.768 "supported_io_types": { 00:11:09.768 "read": true, 00:11:09.768 "write": true, 00:11:09.768 "unmap": false, 00:11:09.768 "flush": false, 00:11:09.768 "reset": true, 00:11:09.768 "nvme_admin": false, 00:11:09.768 "nvme_io": false, 00:11:09.768 "nvme_io_md": false, 00:11:09.768 "write_zeroes": true, 00:11:09.768 "zcopy": false, 00:11:09.768 "get_zone_info": false, 00:11:09.768 "zone_management": false, 00:11:09.768 "zone_append": false, 00:11:09.768 "compare": false, 00:11:09.768 "compare_and_write": false, 00:11:09.768 "abort": false, 00:11:09.768 "seek_hole": false, 00:11:09.768 "seek_data": false, 00:11:09.768 "copy": false, 00:11:09.768 "nvme_iov_md": false 00:11:09.768 }, 00:11:09.768 "memory_domains": [ 00:11:09.768 { 00:11:09.768 "dma_device_id": "system", 00:11:09.768 
"dma_device_type": 1 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.768 "dma_device_type": 2 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "system", 00:11:09.768 "dma_device_type": 1 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.768 "dma_device_type": 2 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "system", 00:11:09.768 "dma_device_type": 1 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.768 "dma_device_type": 2 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "system", 00:11:09.768 "dma_device_type": 1 00:11:09.768 }, 00:11:09.768 { 00:11:09.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.768 "dma_device_type": 2 00:11:09.768 } 00:11:09.768 ], 00:11:09.768 "driver_specific": { 00:11:09.768 "raid": { 00:11:09.768 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:09.768 "strip_size_kb": 0, 00:11:09.768 "state": "online", 00:11:09.768 "raid_level": "raid1", 00:11:09.768 "superblock": true, 00:11:09.768 "num_base_bdevs": 4, 00:11:09.768 "num_base_bdevs_discovered": 4, 00:11:09.768 "num_base_bdevs_operational": 4, 00:11:09.768 "base_bdevs_list": [ 00:11:09.768 { 00:11:09.768 "name": "pt1", 00:11:09.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.768 "is_configured": true, 00:11:09.768 "data_offset": 2048, 00:11:09.768 "data_size": 63488 00:11:09.768 }, 00:11:09.769 { 00:11:09.769 "name": "pt2", 00:11:09.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.769 "is_configured": true, 00:11:09.769 "data_offset": 2048, 00:11:09.769 "data_size": 63488 00:11:09.769 }, 00:11:09.769 { 00:11:09.769 "name": "pt3", 00:11:09.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.769 "is_configured": true, 00:11:09.769 "data_offset": 2048, 00:11:09.769 "data_size": 63488 00:11:09.769 }, 00:11:09.769 { 00:11:09.769 "name": "pt4", 00:11:09.769 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:09.769 "is_configured": true, 00:11:09.769 "data_offset": 2048, 00:11:09.769 "data_size": 63488 00:11:09.769 } 00:11:09.769 ] 00:11:09.769 } 00:11:09.769 } 00:11:09.769 }' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.769 pt2 00:11:09.769 pt3 00:11:09.769 pt4' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.769 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 04:27:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 [2024-12-13 04:27:09.848739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 82f2ed7f-7dfb-496e-81a3-41e8466a8bf9 '!=' 82f2ed7f-7dfb-496e-81a3-41e8466a8bf9 ']' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 [2024-12-13 04:27:09.888420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:10.029 
04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.029 "name": "raid_bdev1", 00:11:10.029 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:10.029 "strip_size_kb": 0, 00:11:10.029 "state": 
"online", 00:11:10.029 "raid_level": "raid1", 00:11:10.029 "superblock": true, 00:11:10.029 "num_base_bdevs": 4, 00:11:10.029 "num_base_bdevs_discovered": 3, 00:11:10.029 "num_base_bdevs_operational": 3, 00:11:10.029 "base_bdevs_list": [ 00:11:10.029 { 00:11:10.029 "name": null, 00:11:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.029 "is_configured": false, 00:11:10.029 "data_offset": 0, 00:11:10.029 "data_size": 63488 00:11:10.029 }, 00:11:10.029 { 00:11:10.029 "name": "pt2", 00:11:10.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.029 "is_configured": true, 00:11:10.029 "data_offset": 2048, 00:11:10.029 "data_size": 63488 00:11:10.029 }, 00:11:10.029 { 00:11:10.029 "name": "pt3", 00:11:10.029 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.029 "is_configured": true, 00:11:10.029 "data_offset": 2048, 00:11:10.029 "data_size": 63488 00:11:10.029 }, 00:11:10.029 { 00:11:10.029 "name": "pt4", 00:11:10.029 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.029 "is_configured": true, 00:11:10.029 "data_offset": 2048, 00:11:10.029 "data_size": 63488 00:11:10.029 } 00:11:10.029 ] 00:11:10.029 }' 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.029 04:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.288 [2024-12-13 04:27:10.271731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.288 [2024-12-13 04:27:10.271837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.288 [2024-12-13 04:27:10.271971] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.288 [2024-12-13 04:27:10.272085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.288 [2024-12-13 04:27:10.272142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.288 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 [2024-12-13 04:27:10.355564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.548 [2024-12-13 
04:27:10.355620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.548 [2024-12-13 04:27:10.355653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:10.548 [2024-12-13 04:27:10.355665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.548 [2024-12-13 04:27:10.358218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.548 [2024-12-13 04:27:10.358300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.548 [2024-12-13 04:27:10.358380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.548 [2024-12-13 04:27:10.358422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.548 pt2 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.548 04:27:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.548 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.548 "name": "raid_bdev1", 00:11:10.548 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:10.548 "strip_size_kb": 0, 00:11:10.548 "state": "configuring", 00:11:10.548 "raid_level": "raid1", 00:11:10.548 "superblock": true, 00:11:10.548 "num_base_bdevs": 4, 00:11:10.548 "num_base_bdevs_discovered": 1, 00:11:10.548 "num_base_bdevs_operational": 3, 00:11:10.548 "base_bdevs_list": [ 00:11:10.548 { 00:11:10.548 "name": null, 00:11:10.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.548 "is_configured": false, 00:11:10.548 "data_offset": 2048, 00:11:10.548 "data_size": 63488 00:11:10.548 }, 00:11:10.548 { 00:11:10.548 "name": "pt2", 00:11:10.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.548 "is_configured": true, 00:11:10.548 "data_offset": 2048, 00:11:10.548 "data_size": 63488 00:11:10.549 }, 00:11:10.549 { 00:11:10.549 "name": null, 00:11:10.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.549 "is_configured": false, 00:11:10.549 "data_offset": 2048, 00:11:10.549 "data_size": 63488 00:11:10.549 }, 00:11:10.549 { 00:11:10.549 "name": null, 00:11:10.549 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.549 "is_configured": false, 00:11:10.549 "data_offset": 2048, 00:11:10.549 "data_size": 63488 00:11:10.549 
} 00:11:10.549 ] 00:11:10.549 }' 00:11:10.549 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.549 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 [2024-12-13 04:27:10.754924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:10.808 [2024-12-13 04:27:10.755087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.808 [2024-12-13 04:27:10.755133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:10.808 [2024-12-13 04:27:10.755169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.808 [2024-12-13 04:27:10.755652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.808 [2024-12-13 04:27:10.755718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:10.808 [2024-12-13 04:27:10.755841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:10.808 [2024-12-13 04:27:10.755902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:10.808 pt3 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.808 "name": "raid_bdev1", 00:11:10.808 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:10.808 "strip_size_kb": 0, 00:11:10.808 "state": "configuring", 00:11:10.808 "raid_level": "raid1", 00:11:10.808 "superblock": true, 00:11:10.808 "num_base_bdevs": 4, 00:11:10.808 "num_base_bdevs_discovered": 2, 
00:11:10.808 "num_base_bdevs_operational": 3, 00:11:10.808 "base_bdevs_list": [ 00:11:10.808 { 00:11:10.808 "name": null, 00:11:10.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.808 "is_configured": false, 00:11:10.808 "data_offset": 2048, 00:11:10.808 "data_size": 63488 00:11:10.808 }, 00:11:10.808 { 00:11:10.808 "name": "pt2", 00:11:10.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.808 "is_configured": true, 00:11:10.808 "data_offset": 2048, 00:11:10.808 "data_size": 63488 00:11:10.808 }, 00:11:10.808 { 00:11:10.808 "name": "pt3", 00:11:10.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.808 "is_configured": true, 00:11:10.808 "data_offset": 2048, 00:11:10.808 "data_size": 63488 00:11:10.808 }, 00:11:10.808 { 00:11:10.808 "name": null, 00:11:10.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.808 "is_configured": false, 00:11:10.808 "data_offset": 2048, 00:11:10.808 "data_size": 63488 00:11:10.808 } 00:11:10.808 ] 00:11:10.808 }' 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.808 04:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.386 [2024-12-13 04:27:11.154212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:11.386 [2024-12-13 
04:27:11.154373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.386 [2024-12-13 04:27:11.154402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:11.386 [2024-12-13 04:27:11.154415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.386 [2024-12-13 04:27:11.154929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.386 [2024-12-13 04:27:11.154953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:11.386 [2024-12-13 04:27:11.155040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:11.386 [2024-12-13 04:27:11.155070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:11.386 [2024-12-13 04:27:11.155185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:11.386 [2024-12-13 04:27:11.155197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:11.386 [2024-12-13 04:27:11.155501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:11.386 [2024-12-13 04:27:11.155644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:11.386 [2024-12-13 04:27:11.155658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:11.386 [2024-12-13 04:27:11.155783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.386 pt4 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.386 04:27:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.386 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.386 "name": "raid_bdev1", 00:11:11.386 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:11.386 "strip_size_kb": 0, 00:11:11.386 "state": "online", 00:11:11.386 "raid_level": "raid1", 00:11:11.386 "superblock": true, 00:11:11.386 "num_base_bdevs": 4, 00:11:11.386 "num_base_bdevs_discovered": 3, 00:11:11.386 "num_base_bdevs_operational": 3, 00:11:11.386 "base_bdevs_list": [ 00:11:11.386 { 00:11:11.386 "name": null, 00:11:11.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.386 
"is_configured": false, 00:11:11.386 "data_offset": 2048, 00:11:11.386 "data_size": 63488 00:11:11.386 }, 00:11:11.386 { 00:11:11.386 "name": "pt2", 00:11:11.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.386 "is_configured": true, 00:11:11.386 "data_offset": 2048, 00:11:11.386 "data_size": 63488 00:11:11.386 }, 00:11:11.386 { 00:11:11.386 "name": "pt3", 00:11:11.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.386 "is_configured": true, 00:11:11.386 "data_offset": 2048, 00:11:11.386 "data_size": 63488 00:11:11.386 }, 00:11:11.386 { 00:11:11.386 "name": "pt4", 00:11:11.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.387 "is_configured": true, 00:11:11.387 "data_offset": 2048, 00:11:11.387 "data_size": 63488 00:11:11.387 } 00:11:11.387 ] 00:11:11.387 }' 00:11:11.387 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.387 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.704 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.704 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.704 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.704 [2024-12-13 04:27:11.657297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.704 [2024-12-13 04:27:11.657328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.704 [2024-12-13 04:27:11.657399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.704 [2024-12-13 04:27:11.657494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.704 [2024-12-13 04:27:11.657504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 
00:11:11.704 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.704 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.705 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.965 [2024-12-13 04:27:11.729198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:11.965 [2024-12-13 04:27:11.729251] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:11.965 [2024-12-13 04:27:11.729269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:11.965 [2024-12-13 04:27:11.729279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.965 [2024-12-13 04:27:11.731721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.965 [2024-12-13 04:27:11.731757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:11.965 [2024-12-13 04:27:11.731827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:11.965 [2024-12-13 04:27:11.731863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:11.965 [2024-12-13 04:27:11.731971] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:11.965 [2024-12-13 04:27:11.731985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.965 [2024-12-13 04:27:11.732001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:11:11.965 [2024-12-13 04:27:11.732029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.965 [2024-12-13 04:27:11.732127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.965 pt1 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.965 "name": "raid_bdev1", 00:11:11.965 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:11.965 "strip_size_kb": 0, 00:11:11.965 "state": "configuring", 00:11:11.965 "raid_level": "raid1", 00:11:11.965 "superblock": true, 00:11:11.965 "num_base_bdevs": 4, 00:11:11.965 "num_base_bdevs_discovered": 2, 00:11:11.965 "num_base_bdevs_operational": 3, 00:11:11.965 "base_bdevs_list": [ 00:11:11.965 { 00:11:11.965 "name": null, 00:11:11.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.965 "is_configured": false, 00:11:11.965 
"data_offset": 2048, 00:11:11.965 "data_size": 63488 00:11:11.965 }, 00:11:11.965 { 00:11:11.965 "name": "pt2", 00:11:11.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.965 "is_configured": true, 00:11:11.965 "data_offset": 2048, 00:11:11.965 "data_size": 63488 00:11:11.965 }, 00:11:11.965 { 00:11:11.965 "name": "pt3", 00:11:11.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.965 "is_configured": true, 00:11:11.965 "data_offset": 2048, 00:11:11.965 "data_size": 63488 00:11:11.965 }, 00:11:11.965 { 00:11:11.965 "name": null, 00:11:11.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.965 "is_configured": false, 00:11:11.965 "data_offset": 2048, 00:11:11.965 "data_size": 63488 00:11:11.965 } 00:11:11.965 ] 00:11:11.965 }' 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.965 04:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:12.225 [2024-12-13 04:27:12.188515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:12.225 [2024-12-13 04:27:12.188659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.225 [2024-12-13 04:27:12.188702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:12.225 [2024-12-13 04:27:12.188736] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.225 [2024-12-13 04:27:12.189269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.225 [2024-12-13 04:27:12.189335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:12.225 [2024-12-13 04:27:12.189463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:12.225 [2024-12-13 04:27:12.189526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:12.225 [2024-12-13 04:27:12.189680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:11:12.225 [2024-12-13 04:27:12.189720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.225 [2024-12-13 04:27:12.190020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:12.225 [2024-12-13 04:27:12.190188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:11:12.225 [2024-12-13 04:27:12.190229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:11:12.225 [2024-12-13 04:27:12.190391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.225 pt4 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.225 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.225 "name": "raid_bdev1", 00:11:12.226 "uuid": "82f2ed7f-7dfb-496e-81a3-41e8466a8bf9", 00:11:12.226 "strip_size_kb": 0, 00:11:12.226 "state": "online", 00:11:12.226 "raid_level": "raid1", 00:11:12.226 "superblock": true, 00:11:12.226 "num_base_bdevs": 4, 00:11:12.226 "num_base_bdevs_discovered": 3, 00:11:12.226 "num_base_bdevs_operational": 3, 00:11:12.226 
"base_bdevs_list": [ 00:11:12.226 { 00:11:12.226 "name": null, 00:11:12.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.226 "is_configured": false, 00:11:12.226 "data_offset": 2048, 00:11:12.226 "data_size": 63488 00:11:12.226 }, 00:11:12.226 { 00:11:12.226 "name": "pt2", 00:11:12.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.226 "is_configured": true, 00:11:12.226 "data_offset": 2048, 00:11:12.226 "data_size": 63488 00:11:12.226 }, 00:11:12.226 { 00:11:12.226 "name": "pt3", 00:11:12.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.226 "is_configured": true, 00:11:12.226 "data_offset": 2048, 00:11:12.226 "data_size": 63488 00:11:12.226 }, 00:11:12.226 { 00:11:12.226 "name": "pt4", 00:11:12.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.226 "is_configured": true, 00:11:12.226 "data_offset": 2048, 00:11:12.226 "data_size": 63488 00:11:12.226 } 00:11:12.226 ] 00:11:12.226 }' 00:11:12.226 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.226 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.796 [2024-12-13 04:27:12.671963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 82f2ed7f-7dfb-496e-81a3-41e8466a8bf9 '!=' 82f2ed7f-7dfb-496e-81a3-41e8466a8bf9 ']' 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86978 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 86978 ']' 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 86978 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86978 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86978' 00:11:12.796 killing process with pid 86978 00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 86978 00:11:12.796 [2024-12-13 04:27:12.739893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.796 [2024-12-13 04:27:12.739998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:12.796 04:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 86978 00:11:12.796 [2024-12-13 04:27:12.740095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.796 [2024-12-13 04:27:12.740106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:11:13.056 [2024-12-13 04:27:12.819513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.316 ************************************ 00:11:13.316 END TEST raid_superblock_test 00:11:13.316 ************************************ 00:11:13.316 04:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:13.316 00:11:13.316 real 0m7.063s 00:11:13.316 user 0m11.677s 00:11:13.316 sys 0m1.623s 00:11:13.316 04:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.316 04:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.316 04:27:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:13.316 04:27:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.316 04:27:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.316 04:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.316 ************************************ 00:11:13.316 START TEST raid_read_error_test 00:11:13.316 ************************************ 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.316 
04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6L6Q5E45iI 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87454 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87454 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 87454 ']' 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.316 04:27:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.316 [2024-12-13 04:27:13.327382] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:11:13.316 [2024-12-13 04:27:13.327649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87454 ] 00:11:13.576 [2024-12-13 04:27:13.483385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.576 [2024-12-13 04:27:13.523416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.836 [2024-12-13 04:27:13.599135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.836 [2024-12-13 04:27:13.599180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 BaseBdev1_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 true 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 [2024-12-13 04:27:14.188582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.407 [2024-12-13 04:27:14.188711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.407 [2024-12-13 04:27:14.188742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:14.407 [2024-12-13 04:27:14.188760] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.407 [2024-12-13 04:27:14.191206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.407 [2024-12-13 04:27:14.191247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.407 BaseBdev1 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 BaseBdev2_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 true 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 [2024-12-13 04:27:14.227043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.407 [2024-12-13 04:27:14.227160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.407 [2024-12-13 04:27:14.227185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:14.407 [2024-12-13 04:27:14.227204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.407 [2024-12-13 04:27:14.229578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.407 [2024-12-13 04:27:14.229632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.407 BaseBdev2 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 BaseBdev3_malloc 00:11:14.407 04:27:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 true 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 [2024-12-13 04:27:14.269646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:14.407 [2024-12-13 04:27:14.269704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.407 [2024-12-13 04:27:14.269726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:14.407 [2024-12-13 04:27:14.269734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.407 [2024-12-13 04:27:14.272146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.407 [2024-12-13 04:27:14.272180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:14.407 BaseBdev3 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 BaseBdev4_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 true 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 [2024-12-13 04:27:14.334244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:14.407 [2024-12-13 04:27:14.334301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.407 [2024-12-13 04:27:14.334330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:14.407 [2024-12-13 04:27:14.334341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.407 [2024-12-13 04:27:14.336912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.407 [2024-12-13 04:27:14.337043] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:14.407 BaseBdev4 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 [2024-12-13 04:27:14.346227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.407 [2024-12-13 04:27:14.348334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.407 [2024-12-13 04:27:14.348486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.407 [2024-12-13 04:27:14.348548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.407 [2024-12-13 04:27:14.348774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:14.407 [2024-12-13 04:27:14.348788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.407 [2024-12-13 04:27:14.349045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:11:14.407 [2024-12-13 04:27:14.349190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:14.407 [2024-12-13 04:27:14.349204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:14.407 [2024-12-13 04:27:14.349329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:14.407 04:27:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.407 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.408 "name": "raid_bdev1", 00:11:14.408 "uuid": "370e6c74-07e5-4d2f-a6b6-091c500b0809", 00:11:14.408 "strip_size_kb": 0, 00:11:14.408 "state": "online", 00:11:14.408 "raid_level": "raid1", 00:11:14.408 "superblock": true, 00:11:14.408 "num_base_bdevs": 4, 00:11:14.408 "num_base_bdevs_discovered": 4, 00:11:14.408 "num_base_bdevs_operational": 4, 00:11:14.408 "base_bdevs_list": [ 00:11:14.408 { 
00:11:14.408 "name": "BaseBdev1", 00:11:14.408 "uuid": "ed09e18d-d6d6-50b9-b738-1c0f92c35358", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 }, 00:11:14.408 { 00:11:14.408 "name": "BaseBdev2", 00:11:14.408 "uuid": "27937274-c50e-5463-bdb7-c23b3c29d729", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 }, 00:11:14.408 { 00:11:14.408 "name": "BaseBdev3", 00:11:14.408 "uuid": "e7a691b6-402a-57cd-9dd0-eb96cdd5f1a7", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 }, 00:11:14.408 { 00:11:14.408 "name": "BaseBdev4", 00:11:14.408 "uuid": "4a102724-ceca-54d5-8cda-8e826ef04476", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 } 00:11:14.408 ] 00:11:14.408 }' 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.408 04:27:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.977 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.977 04:27:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.977 [2024-12-13 04:27:14.853948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.916 04:27:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 04:27:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.916 "name": "raid_bdev1", 00:11:15.916 "uuid": "370e6c74-07e5-4d2f-a6b6-091c500b0809", 00:11:15.916 "strip_size_kb": 0, 00:11:15.916 "state": "online", 00:11:15.916 "raid_level": "raid1", 00:11:15.916 "superblock": true, 00:11:15.916 "num_base_bdevs": 4, 00:11:15.916 "num_base_bdevs_discovered": 4, 00:11:15.916 "num_base_bdevs_operational": 4, 00:11:15.916 "base_bdevs_list": [ 00:11:15.916 { 00:11:15.916 "name": "BaseBdev1", 00:11:15.916 "uuid": "ed09e18d-d6d6-50b9-b738-1c0f92c35358", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 }, 00:11:15.916 { 00:11:15.916 "name": "BaseBdev2", 00:11:15.916 "uuid": "27937274-c50e-5463-bdb7-c23b3c29d729", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 }, 00:11:15.916 { 00:11:15.916 "name": "BaseBdev3", 00:11:15.916 "uuid": "e7a691b6-402a-57cd-9dd0-eb96cdd5f1a7", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 }, 00:11:15.916 { 00:11:15.916 "name": "BaseBdev4", 00:11:15.916 "uuid": "4a102724-ceca-54d5-8cda-8e826ef04476", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 } 00:11:15.916 ] 00:11:15.916 }' 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.916 04:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.486 [2024-12-13 04:27:16.198206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.486 [2024-12-13 04:27:16.198331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.486 [2024-12-13 04:27:16.201050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.486 [2024-12-13 04:27:16.201148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.486 [2024-12-13 04:27:16.201323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.486 [2024-12-13 04:27:16.201380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:16.486 { 00:11:16.486 "results": [ 00:11:16.486 { 00:11:16.486 "job": "raid_bdev1", 00:11:16.486 "core_mask": "0x1", 00:11:16.486 "workload": "randrw", 00:11:16.486 "percentage": 50, 00:11:16.486 "status": "finished", 00:11:16.486 "queue_depth": 1, 00:11:16.486 "io_size": 131072, 00:11:16.486 "runtime": 1.344725, 00:11:16.486 "iops": 8490.955399802933, 00:11:16.486 "mibps": 1061.3694249753667, 00:11:16.486 "io_failed": 0, 00:11:16.486 "io_timeout": 0, 00:11:16.486 "avg_latency_us": 115.1523387954819, 00:11:16.486 "min_latency_us": 23.252401746724892, 00:11:16.486 "max_latency_us": 1516.7720524017468 00:11:16.486 } 00:11:16.486 ], 00:11:16.486 "core_count": 1 00:11:16.486 } 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87454 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 87454 ']' 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 87454 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87454 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.486 killing process with pid 87454 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87454' 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 87454 00:11:16.486 [2024-12-13 04:27:16.248153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.486 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 87454 00:11:16.486 [2024-12-13 04:27:16.316630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6L6Q5E45iI 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:16.746 ************************************ 00:11:16.746 END TEST raid_read_error_test 00:11:16.746 ************************************ 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:16.746 00:11:16.746 real 0m3.428s 00:11:16.746 user 0m4.138s 00:11:16.746 sys 0m0.656s 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.746 04:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.746 04:27:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:16.746 04:27:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.746 04:27:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.746 04:27:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.746 ************************************ 00:11:16.746 START TEST raid_write_error_test 00:11:16.746 ************************************ 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aqW1nFsXR1 00:11:16.746 04:27:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87583 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87583 00:11:16.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 87583 ']' 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.746 04:27:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.006 [2024-12-13 04:27:16.828576] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:11:17.006 [2024-12-13 04:27:16.828709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87583 ] 00:11:17.006 [2024-12-13 04:27:16.984290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.266 [2024-12-13 04:27:17.022831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.266 [2024-12-13 04:27:17.099384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.266 [2024-12-13 04:27:17.099426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 BaseBdev1_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 true 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 [2024-12-13 04:27:17.684521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.837 [2024-12-13 04:27:17.684596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.837 [2024-12-13 04:27:17.684625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:17.837 [2024-12-13 04:27:17.684635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.837 [2024-12-13 04:27:17.687155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.837 [2024-12-13 04:27:17.687193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.837 BaseBdev1 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 BaseBdev2_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.837 04:27:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 true 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 [2024-12-13 04:27:17.731290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.837 [2024-12-13 04:27:17.731343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.837 [2024-12-13 04:27:17.731367] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:17.837 [2024-12-13 04:27:17.731385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.837 [2024-12-13 04:27:17.733878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.837 [2024-12-13 04:27:17.733987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.837 BaseBdev2 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:17.837 BaseBdev3_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 true 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.837 [2024-12-13 04:27:17.778008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.837 [2024-12-13 04:27:17.778057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.837 [2024-12-13 04:27:17.778081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:17.837 [2024-12-13 04:27:17.778090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.837 [2024-12-13 04:27:17.780491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.837 [2024-12-13 04:27:17.780524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:17.837 BaseBdev3 00:11:17.837 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.838 BaseBdev4_malloc 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.838 true 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.838 [2024-12-13 04:27:17.839017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:17.838 [2024-12-13 04:27:17.839077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.838 [2024-12-13 04:27:17.839110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:17.838 [2024-12-13 04:27:17.839121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.838 [2024-12-13 04:27:17.841654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.838 [2024-12-13 04:27:17.841762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:17.838 BaseBdev4 
00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.838 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.838 [2024-12-13 04:27:17.851002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.098 [2024-12-13 04:27:17.853182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.098 [2024-12-13 04:27:17.853329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.098 [2024-12-13 04:27:17.853395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.098 [2024-12-13 04:27:17.853653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:18.098 [2024-12-13 04:27:17.853667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.098 [2024-12-13 04:27:17.853925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:11:18.098 [2024-12-13 04:27:17.854096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:18.098 [2024-12-13 04:27:17.854111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:18.098 [2024-12-13 04:27:17.854239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.098 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.098 "name": "raid_bdev1", 00:11:18.098 "uuid": "642fb825-12ee-499f-ba5a-6c7afa8c690d", 00:11:18.098 "strip_size_kb": 0, 00:11:18.098 "state": "online", 00:11:18.098 "raid_level": "raid1", 00:11:18.098 "superblock": true, 00:11:18.098 "num_base_bdevs": 4, 00:11:18.098 "num_base_bdevs_discovered": 4, 00:11:18.098 
"num_base_bdevs_operational": 4, 00:11:18.098 "base_bdevs_list": [ 00:11:18.098 { 00:11:18.098 "name": "BaseBdev1", 00:11:18.098 "uuid": "1ddec23f-a940-503d-852b-e046bbe68866", 00:11:18.098 "is_configured": true, 00:11:18.098 "data_offset": 2048, 00:11:18.098 "data_size": 63488 00:11:18.098 }, 00:11:18.098 { 00:11:18.098 "name": "BaseBdev2", 00:11:18.098 "uuid": "69e759f4-718c-5db7-b46e-db898bba6a82", 00:11:18.098 "is_configured": true, 00:11:18.098 "data_offset": 2048, 00:11:18.098 "data_size": 63488 00:11:18.098 }, 00:11:18.098 { 00:11:18.098 "name": "BaseBdev3", 00:11:18.098 "uuid": "80228577-9ec3-50d6-b612-d131ac27b267", 00:11:18.098 "is_configured": true, 00:11:18.098 "data_offset": 2048, 00:11:18.098 "data_size": 63488 00:11:18.098 }, 00:11:18.098 { 00:11:18.098 "name": "BaseBdev4", 00:11:18.098 "uuid": "60d405d5-c69c-52a9-adb3-d726cf9aa169", 00:11:18.098 "is_configured": true, 00:11:18.098 "data_offset": 2048, 00:11:18.098 "data_size": 63488 00:11:18.098 } 00:11:18.098 ] 00:11:18.098 }' 00:11:18.099 04:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.099 04:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.358 04:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:18.358 04:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:18.358 [2024-12-13 04:27:18.362634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.298 [2024-12-13 04:27:19.287677] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:19.298 [2024-12-13 04:27:19.287854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.298 [2024-12-13 04:27:19.288159] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000003090 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.298 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.557 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.557 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.557 "name": "raid_bdev1", 00:11:19.557 "uuid": "642fb825-12ee-499f-ba5a-6c7afa8c690d", 00:11:19.557 "strip_size_kb": 0, 00:11:19.557 "state": "online", 00:11:19.558 "raid_level": "raid1", 00:11:19.558 "superblock": true, 00:11:19.558 "num_base_bdevs": 4, 00:11:19.558 "num_base_bdevs_discovered": 3, 00:11:19.558 "num_base_bdevs_operational": 3, 00:11:19.558 "base_bdevs_list": [ 00:11:19.558 { 00:11:19.558 "name": null, 00:11:19.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.558 "is_configured": false, 00:11:19.558 "data_offset": 0, 00:11:19.558 "data_size": 63488 00:11:19.558 }, 00:11:19.558 { 00:11:19.558 "name": "BaseBdev2", 00:11:19.558 "uuid": "69e759f4-718c-5db7-b46e-db898bba6a82", 00:11:19.558 "is_configured": true, 00:11:19.558 "data_offset": 2048, 00:11:19.558 "data_size": 63488 00:11:19.558 }, 00:11:19.558 { 00:11:19.558 "name": "BaseBdev3", 00:11:19.558 "uuid": "80228577-9ec3-50d6-b612-d131ac27b267", 00:11:19.558 "is_configured": true, 00:11:19.558 "data_offset": 2048, 00:11:19.558 "data_size": 63488 00:11:19.558 }, 00:11:19.558 { 00:11:19.558 "name": "BaseBdev4", 00:11:19.558 "uuid": "60d405d5-c69c-52a9-adb3-d726cf9aa169", 00:11:19.558 "is_configured": true, 00:11:19.558 "data_offset": 2048, 00:11:19.558 "data_size": 63488 00:11:19.558 } 00:11:19.558 ] 
00:11:19.558 }' 00:11:19.558 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.558 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.818 [2024-12-13 04:27:19.786760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.818 [2024-12-13 04:27:19.786827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.818 [2024-12-13 04:27:19.789318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.818 [2024-12-13 04:27:19.789384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.818 [2024-12-13 04:27:19.789504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.818 [2024-12-13 04:27:19.789518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:19.818 { 00:11:19.818 "results": [ 00:11:19.818 { 00:11:19.818 "job": "raid_bdev1", 00:11:19.818 "core_mask": "0x1", 00:11:19.818 "workload": "randrw", 00:11:19.818 "percentage": 50, 00:11:19.818 "status": "finished", 00:11:19.818 "queue_depth": 1, 00:11:19.818 "io_size": 131072, 00:11:19.818 "runtime": 1.424749, 00:11:19.818 "iops": 9455.700618143968, 00:11:19.818 "mibps": 1181.962577267996, 00:11:19.818 "io_failed": 0, 00:11:19.818 "io_timeout": 0, 00:11:19.818 "avg_latency_us": 103.15509547863789, 00:11:19.818 "min_latency_us": 23.252401746724892, 00:11:19.818 "max_latency_us": 1366.5257641921398 00:11:19.818 } 00:11:19.818 ], 00:11:19.818 "core_count": 1 
00:11:19.818 } 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87583 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 87583 ']' 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 87583 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87583 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.818 killing process with pid 87583 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87583' 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 87583 00:11:19.818 [2024-12-13 04:27:19.825711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.818 04:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 87583 00:11:20.077 [2024-12-13 04:27:19.893523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aqW1nFsXR1 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:20.337 00:11:20.337 real 0m3.505s 00:11:20.337 user 0m4.266s 00:11:20.337 sys 0m0.642s 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.337 ************************************ 00:11:20.337 END TEST raid_write_error_test 00:11:20.337 ************************************ 00:11:20.337 04:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.337 04:27:20 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:20.337 04:27:20 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:20.337 04:27:20 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:20.337 04:27:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:20.337 04:27:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.337 04:27:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.337 ************************************ 00:11:20.337 START TEST raid_rebuild_test 00:11:20.337 ************************************ 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:20.337 
04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87716 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87716 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87716 ']' 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.337 04:27:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.596 [2024-12-13 04:27:20.401856] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:20.596 [2024-12-13 04:27:20.402057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87716 ] 00:11:20.596 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:20.596 Zero copy mechanism will not be used. 
00:11:20.596 [2024-12-13 04:27:20.557925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.596 [2024-12-13 04:27:20.596306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.855 [2024-12-13 04:27:20.672114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.855 [2024-12-13 04:27:20.672149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 BaseBdev1_malloc 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 [2024-12-13 04:27:21.257181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:21.424 [2024-12-13 04:27:21.257248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.424 [2024-12-13 04:27:21.257276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:21.424 [2024-12-13 04:27:21.257297] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.424 [2024-12-13 04:27:21.259801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.424 [2024-12-13 04:27:21.259836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.424 BaseBdev1 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 BaseBdev2_malloc 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 [2024-12-13 04:27:21.291557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:21.424 [2024-12-13 04:27:21.291609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.424 [2024-12-13 04:27:21.291633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:21.424 [2024-12-13 04:27:21.291642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.424 [2024-12-13 04:27:21.294039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.424 [2024-12-13 04:27:21.294078] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.424 BaseBdev2 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 spare_malloc 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 spare_delay 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 [2024-12-13 04:27:21.337885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:21.424 [2024-12-13 04:27:21.337932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.424 [2024-12-13 04:27:21.337952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:21.424 [2024-12-13 04:27:21.337961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.424 [2024-12-13 
04:27:21.340324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.424 [2024-12-13 04:27:21.340359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:21.424 spare 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 [2024-12-13 04:27:21.349895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.424 [2024-12-13 04:27:21.352008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.424 [2024-12-13 04:27:21.352100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:21.424 [2024-12-13 04:27:21.352111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:21.424 [2024-12-13 04:27:21.352436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:21.424 [2024-12-13 04:27:21.352593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:21.424 [2024-12-13 04:27:21.352612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:21.424 [2024-12-13 04:27:21.352752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.424 04:27:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.424 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.424 "name": "raid_bdev1", 00:11:21.424 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:21.424 "strip_size_kb": 0, 00:11:21.424 "state": "online", 00:11:21.424 "raid_level": "raid1", 00:11:21.424 "superblock": false, 00:11:21.424 "num_base_bdevs": 2, 00:11:21.424 "num_base_bdevs_discovered": 2, 00:11:21.424 "num_base_bdevs_operational": 2, 00:11:21.424 "base_bdevs_list": [ 00:11:21.424 { 00:11:21.424 "name": "BaseBdev1", 
00:11:21.424 "uuid": "586b4583-82fc-51e2-a373-1c3375a847e8", 00:11:21.424 "is_configured": true, 00:11:21.424 "data_offset": 0, 00:11:21.424 "data_size": 65536 00:11:21.424 }, 00:11:21.424 { 00:11:21.425 "name": "BaseBdev2", 00:11:21.425 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:21.425 "is_configured": true, 00:11:21.425 "data_offset": 0, 00:11:21.425 "data_size": 65536 00:11:21.425 } 00:11:21.425 ] 00:11:21.425 }' 00:11:21.425 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.425 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.993 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.993 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.994 [2024-12-13 04:27:21.849314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:21.994 
04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.994 04:27:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:22.253 [2024-12-13 04:27:22.104706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:22.253 /dev/nbd0 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.253 1+0 records in 00:11:22.253 1+0 records out 00:11:22.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429416 s, 9.5 MB/s 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:22.253 04:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:26.447 65536+0 records in 00:11:26.447 65536+0 records out 00:11:26.447 33554432 bytes (34 MB, 32 MiB) copied, 4.23053 s, 7.9 MB/s 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.447 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:26.706 [2024-12-13 04:27:26.663860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 [2024-12-13 04:27:26.681226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 04:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.965 04:27:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.965 "name": "raid_bdev1", 00:11:26.965 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:26.965 "strip_size_kb": 0, 00:11:26.965 "state": "online", 00:11:26.965 "raid_level": "raid1", 00:11:26.965 "superblock": false, 00:11:26.965 "num_base_bdevs": 2, 00:11:26.965 "num_base_bdevs_discovered": 1, 00:11:26.965 "num_base_bdevs_operational": 1, 00:11:26.965 "base_bdevs_list": [ 00:11:26.965 { 00:11:26.965 "name": null, 00:11:26.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.965 "is_configured": false, 00:11:26.965 "data_offset": 0, 00:11:26.965 "data_size": 65536 00:11:26.965 }, 00:11:26.965 { 00:11:26.965 "name": "BaseBdev2", 00:11:26.965 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:26.965 "is_configured": true, 00:11:26.965 "data_offset": 0, 00:11:26.965 "data_size": 65536 00:11:26.965 } 00:11:26.965 ] 00:11:26.965 }' 00:11:26.965 04:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.965 04:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.224 04:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:27.224 04:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.224 04:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.224 [2024-12-13 04:27:27.124510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:27.224 [2024-12-13 04:27:27.133231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:11:27.224 04:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.224 04:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:27.224 [2024-12-13 04:27:27.135477] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.418 "name": "raid_bdev1", 00:11:28.418 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:28.418 "strip_size_kb": 0, 00:11:28.418 "state": "online", 00:11:28.418 "raid_level": "raid1", 00:11:28.418 "superblock": false, 00:11:28.418 "num_base_bdevs": 2, 00:11:28.418 "num_base_bdevs_discovered": 2, 00:11:28.418 "num_base_bdevs_operational": 2, 00:11:28.418 "process": { 00:11:28.418 "type": "rebuild", 00:11:28.418 "target": "spare", 00:11:28.418 "progress": { 00:11:28.418 "blocks": 20480, 00:11:28.418 "percent": 31 00:11:28.418 } 00:11:28.418 }, 00:11:28.418 "base_bdevs_list": [ 00:11:28.418 { 00:11:28.418 "name": "spare", 00:11:28.418 "uuid": "907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:28.418 "is_configured": true, 00:11:28.418 "data_offset": 0, 00:11:28.418 
"data_size": 65536 00:11:28.418 }, 00:11:28.418 { 00:11:28.418 "name": "BaseBdev2", 00:11:28.418 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:28.418 "is_configured": true, 00:11:28.418 "data_offset": 0, 00:11:28.418 "data_size": 65536 00:11:28.418 } 00:11:28.418 ] 00:11:28.418 }' 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.418 [2024-12-13 04:27:28.276645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.418 [2024-12-13 04:27:28.343750] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:28.418 [2024-12-13 04:27:28.343890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.418 [2024-12-13 04:27:28.343934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.418 [2024-12-13 04:27:28.343956] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.418 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.418 "name": "raid_bdev1", 00:11:28.418 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:28.419 "strip_size_kb": 0, 00:11:28.419 "state": "online", 00:11:28.419 "raid_level": "raid1", 00:11:28.419 "superblock": false, 00:11:28.419 "num_base_bdevs": 2, 00:11:28.419 "num_base_bdevs_discovered": 1, 00:11:28.419 "num_base_bdevs_operational": 1, 00:11:28.419 "base_bdevs_list": [ 00:11:28.419 { 00:11:28.419 "name": null, 00:11:28.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.419 
"is_configured": false, 00:11:28.419 "data_offset": 0, 00:11:28.419 "data_size": 65536 00:11:28.419 }, 00:11:28.419 { 00:11:28.419 "name": "BaseBdev2", 00:11:28.419 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:28.419 "is_configured": true, 00:11:28.419 "data_offset": 0, 00:11:28.419 "data_size": 65536 00:11:28.419 } 00:11:28.419 ] 00:11:28.419 }' 00:11:28.419 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.419 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.986 "name": "raid_bdev1", 00:11:28.986 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:28.986 "strip_size_kb": 0, 00:11:28.986 "state": "online", 00:11:28.986 "raid_level": "raid1", 00:11:28.986 "superblock": false, 00:11:28.986 "num_base_bdevs": 2, 00:11:28.986 
"num_base_bdevs_discovered": 1, 00:11:28.986 "num_base_bdevs_operational": 1, 00:11:28.986 "base_bdevs_list": [ 00:11:28.986 { 00:11:28.986 "name": null, 00:11:28.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.986 "is_configured": false, 00:11:28.986 "data_offset": 0, 00:11:28.986 "data_size": 65536 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "name": "BaseBdev2", 00:11:28.986 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:28.986 "is_configured": true, 00:11:28.986 "data_offset": 0, 00:11:28.986 "data_size": 65536 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 }' 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 [2024-12-13 04:27:28.919308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.986 [2024-12-13 04:27:28.927447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 04:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:28.986 [2024-12-13 04:27:28.929748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:29.922 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.922 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.922 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.922 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.922 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.181 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.181 04:27:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.181 04:27:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.181 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.181 04:27:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.181 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.181 "name": "raid_bdev1", 00:11:30.181 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:30.181 "strip_size_kb": 0, 00:11:30.181 "state": "online", 00:11:30.181 "raid_level": "raid1", 00:11:30.181 "superblock": false, 00:11:30.181 "num_base_bdevs": 2, 00:11:30.181 "num_base_bdevs_discovered": 2, 00:11:30.181 "num_base_bdevs_operational": 2, 00:11:30.181 "process": { 00:11:30.181 "type": "rebuild", 00:11:30.181 "target": "spare", 00:11:30.181 "progress": { 00:11:30.181 "blocks": 20480, 00:11:30.181 "percent": 31 00:11:30.181 } 00:11:30.181 }, 00:11:30.182 "base_bdevs_list": [ 00:11:30.182 { 00:11:30.182 "name": "spare", 00:11:30.182 "uuid": "907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:30.182 "is_configured": true, 00:11:30.182 "data_offset": 0, 00:11:30.182 "data_size": 65536 00:11:30.182 }, 00:11:30.182 { 00:11:30.182 "name": "BaseBdev2", 00:11:30.182 "uuid": 
"3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:30.182 "is_configured": true, 00:11:30.182 "data_offset": 0, 00:11:30.182 "data_size": 65536 00:11:30.182 } 00:11:30.182 ] 00:11:30.182 }' 00:11:30.182 04:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=299 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.182 "name": "raid_bdev1", 00:11:30.182 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:30.182 "strip_size_kb": 0, 00:11:30.182 "state": "online", 00:11:30.182 "raid_level": "raid1", 00:11:30.182 "superblock": false, 00:11:30.182 "num_base_bdevs": 2, 00:11:30.182 "num_base_bdevs_discovered": 2, 00:11:30.182 "num_base_bdevs_operational": 2, 00:11:30.182 "process": { 00:11:30.182 "type": "rebuild", 00:11:30.182 "target": "spare", 00:11:30.182 "progress": { 00:11:30.182 "blocks": 22528, 00:11:30.182 "percent": 34 00:11:30.182 } 00:11:30.182 }, 00:11:30.182 "base_bdevs_list": [ 00:11:30.182 { 00:11:30.182 "name": "spare", 00:11:30.182 "uuid": "907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:30.182 "is_configured": true, 00:11:30.182 "data_offset": 0, 00:11:30.182 "data_size": 65536 00:11:30.182 }, 00:11:30.182 { 00:11:30.182 "name": "BaseBdev2", 00:11:30.182 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:30.182 "is_configured": true, 00:11:30.182 "data_offset": 0, 00:11:30.182 "data_size": 65536 00:11:30.182 } 00:11:30.182 ] 00:11:30.182 }' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.182 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.441 04:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.441 04:27:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.378 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.378 "name": "raid_bdev1", 00:11:31.378 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:31.378 "strip_size_kb": 0, 00:11:31.378 "state": "online", 00:11:31.378 "raid_level": "raid1", 00:11:31.379 "superblock": false, 00:11:31.379 "num_base_bdevs": 2, 00:11:31.379 "num_base_bdevs_discovered": 2, 00:11:31.379 "num_base_bdevs_operational": 2, 00:11:31.379 "process": { 00:11:31.379 "type": "rebuild", 00:11:31.379 "target": "spare", 00:11:31.379 "progress": { 00:11:31.379 "blocks": 47104, 00:11:31.379 "percent": 71 00:11:31.379 } 00:11:31.379 }, 00:11:31.379 "base_bdevs_list": [ 00:11:31.379 { 00:11:31.379 "name": "spare", 00:11:31.379 "uuid": 
"907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:31.379 "is_configured": true, 00:11:31.379 "data_offset": 0, 00:11:31.379 "data_size": 65536 00:11:31.379 }, 00:11:31.379 { 00:11:31.379 "name": "BaseBdev2", 00:11:31.379 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:31.379 "is_configured": true, 00:11:31.379 "data_offset": 0, 00:11:31.379 "data_size": 65536 00:11:31.379 } 00:11:31.379 ] 00:11:31.379 }' 00:11:31.379 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.379 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:31.379 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.379 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:31.379 04:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:32.316 [2024-12-13 04:27:32.150513] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:32.316 [2024-12-13 04:27:32.150606] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:32.316 [2024-12-13 04:27:32.150653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.576 04:27:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.576 "name": "raid_bdev1", 00:11:32.576 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:32.576 "strip_size_kb": 0, 00:11:32.576 "state": "online", 00:11:32.576 "raid_level": "raid1", 00:11:32.576 "superblock": false, 00:11:32.576 "num_base_bdevs": 2, 00:11:32.576 "num_base_bdevs_discovered": 2, 00:11:32.576 "num_base_bdevs_operational": 2, 00:11:32.576 "base_bdevs_list": [ 00:11:32.576 { 00:11:32.576 "name": "spare", 00:11:32.576 "uuid": "907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:32.576 "is_configured": true, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 65536 00:11:32.576 }, 00:11:32.576 { 00:11:32.576 "name": "BaseBdev2", 00:11:32.576 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:32.576 "is_configured": true, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 65536 00:11:32.576 } 00:11:32.576 ] 00:11:32.576 }' 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.576 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.576 "name": "raid_bdev1", 00:11:32.576 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:32.576 "strip_size_kb": 0, 00:11:32.576 "state": "online", 00:11:32.576 "raid_level": "raid1", 00:11:32.576 "superblock": false, 00:11:32.576 "num_base_bdevs": 2, 00:11:32.576 "num_base_bdevs_discovered": 2, 00:11:32.576 "num_base_bdevs_operational": 2, 00:11:32.576 "base_bdevs_list": [ 00:11:32.576 { 00:11:32.576 "name": "spare", 00:11:32.576 "uuid": "907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:32.576 "is_configured": true, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 65536 00:11:32.576 }, 00:11:32.576 { 00:11:32.576 "name": "BaseBdev2", 00:11:32.576 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:32.576 "is_configured": true, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 65536 
00:11:32.577 } 00:11:32.577 ] 00:11:32.577 }' 00:11:32.577 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.836 
04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.836 "name": "raid_bdev1", 00:11:32.836 "uuid": "1a8ed432-b2c4-4ea6-a873-b8f731047626", 00:11:32.836 "strip_size_kb": 0, 00:11:32.836 "state": "online", 00:11:32.836 "raid_level": "raid1", 00:11:32.836 "superblock": false, 00:11:32.836 "num_base_bdevs": 2, 00:11:32.836 "num_base_bdevs_discovered": 2, 00:11:32.836 "num_base_bdevs_operational": 2, 00:11:32.836 "base_bdevs_list": [ 00:11:32.836 { 00:11:32.836 "name": "spare", 00:11:32.836 "uuid": "907f5030-8a8e-5ae3-860b-083b7629f2fc", 00:11:32.836 "is_configured": true, 00:11:32.836 "data_offset": 0, 00:11:32.836 "data_size": 65536 00:11:32.836 }, 00:11:32.836 { 00:11:32.836 "name": "BaseBdev2", 00:11:32.836 "uuid": "3fa5a475-1fc1-5f02-954e-c884cd5710ef", 00:11:32.836 "is_configured": true, 00:11:32.836 "data_offset": 0, 00:11:32.836 "data_size": 65536 00:11:32.836 } 00:11:32.836 ] 00:11:32.836 }' 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.836 04:27:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.096 [2024-12-13 04:27:33.048549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.096 [2024-12-13 04:27:33.048641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.096 [2024-12-13 04:27:33.048765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.096 [2024-12-13 04:27:33.048860] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.096 [2024-12-13 04:27:33.048891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:33.096 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.097 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:33.357 /dev/nbd0 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.357 1+0 records in 00:11:33.357 1+0 records out 00:11:33.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412948 s, 9.9 MB/s 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.357 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:33.616 /dev/nbd1 00:11:33.616 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:33.616 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:33.616 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:33.616 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.617 1+0 records in 00:11:33.617 1+0 records out 00:11:33.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508155 s, 8.1 MB/s 00:11:33.617 04:27:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.617 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.876 
04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.876 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.137 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:34.137 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.137 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.137 04:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87716 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87716 ']' 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87716 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87716 00:11:34.137 killing process with pid 87716 00:11:34.137 Received shutdown signal, test time was about 60.000000 seconds 00:11:34.137 00:11:34.137 Latency(us) 00:11:34.137 [2024-12-13T04:27:34.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.137 [2024-12-13T04:27:34.152Z] =================================================================================================================== 00:11:34.137 [2024-12-13T04:27:34.152Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87716' 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87716 00:11:34.137 [2024-12-13 04:27:34.143778] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.137 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87716 00:11:34.396 [2024-12-13 04:27:34.202999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:34.656 00:11:34.656 real 0m14.214s 00:11:34.656 user 0m16.142s 00:11:34.656 sys 0m3.128s 00:11:34.656 ************************************ 00:11:34.656 END TEST raid_rebuild_test 00:11:34.656 ************************************ 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.656 04:27:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.656 04:27:34 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:34.656 04:27:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:34.656 04:27:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.656 04:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.656 ************************************ 00:11:34.656 START TEST raid_rebuild_test_sb 00:11:34.656 ************************************ 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.656 04:27:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88130 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88130 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88130 ']' 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.656 
04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.656 04:27:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.916 [2024-12-13 04:27:34.695882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:34.916 [2024-12-13 04:27:34.696092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:34.916 Zero copy mechanism will not be used. 00:11:34.916 -allocations --file-prefix=spdk_pid88130 ] 00:11:34.917 [2024-12-13 04:27:34.853059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.917 [2024-12-13 04:27:34.895208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.176 [2024-12-13 04:27:34.973183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.176 [2024-12-13 04:27:34.973306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 BaseBdev1_malloc 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 [2024-12-13 04:27:35.527561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:35.745 [2024-12-13 04:27:35.527702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.745 [2024-12-13 04:27:35.527738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:35.745 [2024-12-13 04:27:35.527751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.745 [2024-12-13 04:27:35.530191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.745 [2024-12-13 04:27:35.530282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:35.745 BaseBdev1 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 BaseBdev2_malloc 00:11:35.745 
04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 [2024-12-13 04:27:35.562159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:35.745 [2024-12-13 04:27:35.562213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.745 [2024-12-13 04:27:35.562237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.745 [2024-12-13 04:27:35.562246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.745 [2024-12-13 04:27:35.564643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.745 [2024-12-13 04:27:35.564731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:35.745 BaseBdev2 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 spare_malloc 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 spare_delay 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 [2024-12-13 04:27:35.608854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:35.745 [2024-12-13 04:27:35.608903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.745 [2024-12-13 04:27:35.608924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:35.745 [2024-12-13 04:27:35.608933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.745 [2024-12-13 04:27:35.611389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.745 [2024-12-13 04:27:35.611424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:35.745 spare 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.745 [2024-12-13 04:27:35.620882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.745 [2024-12-13 
04:27:35.623067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.745 [2024-12-13 04:27:35.623225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:35.745 [2024-12-13 04:27:35.623238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.745 [2024-12-13 04:27:35.623538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:35.745 [2024-12-13 04:27:35.623683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:35.745 [2024-12-13 04:27:35.623747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:35.745 [2024-12-13 04:27:35.623872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.745 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.746 "name": "raid_bdev1", 00:11:35.746 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:35.746 "strip_size_kb": 0, 00:11:35.746 "state": "online", 00:11:35.746 "raid_level": "raid1", 00:11:35.746 "superblock": true, 00:11:35.746 "num_base_bdevs": 2, 00:11:35.746 "num_base_bdevs_discovered": 2, 00:11:35.746 "num_base_bdevs_operational": 2, 00:11:35.746 "base_bdevs_list": [ 00:11:35.746 { 00:11:35.746 "name": "BaseBdev1", 00:11:35.746 "uuid": "5f14dbbe-261b-55f0-a26f-af1869f06287", 00:11:35.746 "is_configured": true, 00:11:35.746 "data_offset": 2048, 00:11:35.746 "data_size": 63488 00:11:35.746 }, 00:11:35.746 { 00:11:35.746 "name": "BaseBdev2", 00:11:35.746 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:35.746 "is_configured": true, 00:11:35.746 "data_offset": 2048, 00:11:35.746 "data_size": 63488 00:11:35.746 } 00:11:35.746 ] 00:11:35.746 }' 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.746 04:27:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:36.315 [2024-12-13 04:27:36.044762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:36.315 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:36.574 [2024-12-13 04:27:36.340172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:36.574 /dev/nbd0 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.574 1+0 records in 00:11:36.574 1+0 records out 00:11:36.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232018 s, 17.7 MB/s 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:36.574 04:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:40.769 63488+0 records in 00:11:40.769 63488+0 records out 00:11:40.769 32505856 bytes (33 MB, 31 MiB) copied, 3.66462 s, 8.9 MB/s 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:40.769 [2024-12-13 04:27:40.301162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.769 [2024-12-13 04:27:40.318426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.769 "name": "raid_bdev1", 00:11:40.769 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:40.769 "strip_size_kb": 0, 00:11:40.769 "state": "online", 00:11:40.769 "raid_level": "raid1", 00:11:40.769 "superblock": true, 00:11:40.769 "num_base_bdevs": 2, 00:11:40.769 "num_base_bdevs_discovered": 1, 00:11:40.769 "num_base_bdevs_operational": 1, 00:11:40.769 "base_bdevs_list": [ 00:11:40.769 { 00:11:40.769 "name": null, 00:11:40.769 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:40.769 "is_configured": false, 00:11:40.769 "data_offset": 0, 00:11:40.769 "data_size": 63488 00:11:40.769 }, 00:11:40.769 { 00:11:40.769 "name": "BaseBdev2", 00:11:40.769 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:40.769 "is_configured": true, 00:11:40.769 "data_offset": 2048, 00:11:40.769 "data_size": 63488 00:11:40.769 } 00:11:40.769 ] 00:11:40.769 }' 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.769 [2024-12-13 04:27:40.733710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:40.769 [2024-12-13 04:27:40.738994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.769 04:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:40.769 [2024-12-13 04:27:40.740888] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.151 
04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.151 "name": "raid_bdev1", 00:11:42.151 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:42.151 "strip_size_kb": 0, 00:11:42.151 "state": "online", 00:11:42.151 "raid_level": "raid1", 00:11:42.151 "superblock": true, 00:11:42.151 "num_base_bdevs": 2, 00:11:42.151 "num_base_bdevs_discovered": 2, 00:11:42.151 "num_base_bdevs_operational": 2, 00:11:42.151 "process": { 00:11:42.151 "type": "rebuild", 00:11:42.151 "target": "spare", 00:11:42.151 "progress": { 00:11:42.151 "blocks": 20480, 00:11:42.151 "percent": 32 00:11:42.151 } 00:11:42.151 }, 00:11:42.151 "base_bdevs_list": [ 00:11:42.151 { 00:11:42.151 "name": "spare", 00:11:42.151 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:42.151 "is_configured": true, 00:11:42.151 "data_offset": 2048, 00:11:42.151 "data_size": 63488 00:11:42.151 }, 00:11:42.151 { 00:11:42.151 "name": "BaseBdev2", 00:11:42.151 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:42.151 "is_configured": true, 00:11:42.151 "data_offset": 2048, 00:11:42.151 "data_size": 63488 00:11:42.151 } 00:11:42.151 ] 00:11:42.151 }' 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.151 [2024-12-13 04:27:41.877237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:42.151 [2024-12-13 04:27:41.945652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:42.151 [2024-12-13 04:27:41.945712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.151 [2024-12-13 04:27:41.945731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:42.151 [2024-12-13 04:27:41.945738] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.151 "name": "raid_bdev1", 00:11:42.151 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:42.151 "strip_size_kb": 0, 00:11:42.151 "state": "online", 00:11:42.151 "raid_level": "raid1", 00:11:42.151 "superblock": true, 00:11:42.151 "num_base_bdevs": 2, 00:11:42.151 "num_base_bdevs_discovered": 1, 00:11:42.151 "num_base_bdevs_operational": 1, 00:11:42.151 "base_bdevs_list": [ 00:11:42.151 { 00:11:42.151 "name": null, 00:11:42.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.151 "is_configured": false, 00:11:42.151 "data_offset": 0, 00:11:42.151 "data_size": 63488 00:11:42.151 }, 00:11:42.151 { 00:11:42.151 "name": "BaseBdev2", 00:11:42.151 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:42.151 "is_configured": true, 00:11:42.151 "data_offset": 2048, 00:11:42.151 "data_size": 63488 00:11:42.151 } 00:11:42.151 ] 00:11:42.151 }' 00:11:42.151 04:27:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.151 04:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.423 04:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.708 "name": "raid_bdev1", 00:11:42.708 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:42.708 "strip_size_kb": 0, 00:11:42.708 "state": "online", 00:11:42.708 "raid_level": "raid1", 00:11:42.708 "superblock": true, 00:11:42.708 "num_base_bdevs": 2, 00:11:42.708 "num_base_bdevs_discovered": 1, 00:11:42.708 "num_base_bdevs_operational": 1, 00:11:42.708 "base_bdevs_list": [ 00:11:42.708 { 00:11:42.708 "name": null, 00:11:42.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.708 "is_configured": false, 00:11:42.708 "data_offset": 0, 00:11:42.708 "data_size": 63488 00:11:42.708 }, 00:11:42.708 
{ 00:11:42.708 "name": "BaseBdev2", 00:11:42.708 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:42.708 "is_configured": true, 00:11:42.708 "data_offset": 2048, 00:11:42.708 "data_size": 63488 00:11:42.708 } 00:11:42.708 ] 00:11:42.708 }' 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.708 [2024-12-13 04:27:42.545804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.708 [2024-12-13 04:27:42.554878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.708 04:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:42.708 [2024-12-13 04:27:42.557216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.693 04:27:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.693 "name": "raid_bdev1", 00:11:43.693 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:43.693 "strip_size_kb": 0, 00:11:43.693 "state": "online", 00:11:43.693 "raid_level": "raid1", 00:11:43.693 "superblock": true, 00:11:43.693 "num_base_bdevs": 2, 00:11:43.693 "num_base_bdevs_discovered": 2, 00:11:43.693 "num_base_bdevs_operational": 2, 00:11:43.693 "process": { 00:11:43.693 "type": "rebuild", 00:11:43.693 "target": "spare", 00:11:43.693 "progress": { 00:11:43.693 "blocks": 20480, 00:11:43.693 "percent": 32 00:11:43.693 } 00:11:43.693 }, 00:11:43.693 "base_bdevs_list": [ 00:11:43.693 { 00:11:43.693 "name": "spare", 00:11:43.693 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:43.693 "is_configured": true, 00:11:43.693 "data_offset": 2048, 00:11:43.693 "data_size": 63488 00:11:43.693 }, 00:11:43.693 { 00:11:43.693 "name": "BaseBdev2", 00:11:43.693 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:43.693 "is_configured": true, 00:11:43.693 "data_offset": 2048, 00:11:43.693 "data_size": 63488 00:11:43.693 } 00:11:43.693 ] 00:11:43.693 }' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:43.693 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=312 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.693 04:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.953 04:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.953 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.953 "name": "raid_bdev1", 00:11:43.953 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:43.953 "strip_size_kb": 0, 00:11:43.953 "state": "online", 00:11:43.953 "raid_level": "raid1", 00:11:43.953 "superblock": true, 00:11:43.953 "num_base_bdevs": 2, 00:11:43.953 "num_base_bdevs_discovered": 2, 00:11:43.953 "num_base_bdevs_operational": 2, 00:11:43.953 "process": { 00:11:43.953 "type": "rebuild", 00:11:43.953 "target": "spare", 00:11:43.953 "progress": { 00:11:43.953 "blocks": 22528, 00:11:43.953 "percent": 35 00:11:43.953 } 00:11:43.953 }, 00:11:43.953 "base_bdevs_list": [ 00:11:43.953 { 00:11:43.953 "name": "spare", 00:11:43.953 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:43.953 "is_configured": true, 00:11:43.953 "data_offset": 2048, 00:11:43.953 "data_size": 63488 00:11:43.953 }, 00:11:43.953 { 00:11:43.953 "name": "BaseBdev2", 00:11:43.953 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:43.953 "is_configured": true, 00:11:43.953 "data_offset": 2048, 00:11:43.953 "data_size": 63488 00:11:43.953 } 00:11:43.953 ] 00:11:43.953 }' 00:11:43.953 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.953 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.953 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.953 04:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.953 04:27:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.892 "name": "raid_bdev1", 00:11:44.892 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:44.892 "strip_size_kb": 0, 00:11:44.892 "state": "online", 00:11:44.892 "raid_level": "raid1", 00:11:44.892 "superblock": true, 00:11:44.892 "num_base_bdevs": 2, 00:11:44.892 "num_base_bdevs_discovered": 2, 00:11:44.892 "num_base_bdevs_operational": 2, 00:11:44.892 "process": { 00:11:44.892 "type": "rebuild", 00:11:44.892 "target": "spare", 00:11:44.892 "progress": { 00:11:44.892 "blocks": 45056, 00:11:44.892 "percent": 70 00:11:44.892 } 00:11:44.892 }, 00:11:44.892 "base_bdevs_list": [ 00:11:44.892 { 
00:11:44.892 "name": "spare", 00:11:44.892 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:44.892 "is_configured": true, 00:11:44.892 "data_offset": 2048, 00:11:44.892 "data_size": 63488 00:11:44.892 }, 00:11:44.892 { 00:11:44.892 "name": "BaseBdev2", 00:11:44.892 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:44.892 "is_configured": true, 00:11:44.892 "data_offset": 2048, 00:11:44.892 "data_size": 63488 00:11:44.892 } 00:11:44.892 ] 00:11:44.892 }' 00:11:44.892 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.152 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:45.152 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.152 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.152 04:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:45.720 [2024-12-13 04:27:45.678122] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:45.720 [2024-12-13 04:27:45.678329] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:45.720 [2024-12-13 04:27:45.678483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.980 04:27:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.980 04:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.240 "name": "raid_bdev1", 00:11:46.240 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:46.240 "strip_size_kb": 0, 00:11:46.240 "state": "online", 00:11:46.240 "raid_level": "raid1", 00:11:46.240 "superblock": true, 00:11:46.240 "num_base_bdevs": 2, 00:11:46.240 "num_base_bdevs_discovered": 2, 00:11:46.240 "num_base_bdevs_operational": 2, 00:11:46.240 "base_bdevs_list": [ 00:11:46.240 { 00:11:46.240 "name": "spare", 00:11:46.240 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:46.240 "is_configured": true, 00:11:46.240 "data_offset": 2048, 00:11:46.240 "data_size": 63488 00:11:46.240 }, 00:11:46.240 { 00:11:46.240 "name": "BaseBdev2", 00:11:46.240 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:46.240 "is_configured": true, 00:11:46.240 "data_offset": 2048, 00:11:46.240 "data_size": 63488 00:11:46.240 } 00:11:46.240 ] 00:11:46.240 }' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.240 "name": "raid_bdev1", 00:11:46.240 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:46.240 "strip_size_kb": 0, 00:11:46.240 "state": "online", 00:11:46.240 "raid_level": "raid1", 00:11:46.240 "superblock": true, 00:11:46.240 "num_base_bdevs": 2, 00:11:46.240 "num_base_bdevs_discovered": 2, 00:11:46.240 "num_base_bdevs_operational": 2, 00:11:46.240 "base_bdevs_list": [ 00:11:46.240 { 00:11:46.240 "name": "spare", 00:11:46.240 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:46.240 "is_configured": true, 00:11:46.240 "data_offset": 2048, 00:11:46.240 "data_size": 63488 00:11:46.240 }, 00:11:46.240 { 00:11:46.240 "name": 
"BaseBdev2", 00:11:46.240 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:46.240 "is_configured": true, 00:11:46.240 "data_offset": 2048, 00:11:46.240 "data_size": 63488 00:11:46.240 } 00:11:46.240 ] 00:11:46.240 }' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.240 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.241 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.500 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.500 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.500 "name": "raid_bdev1", 00:11:46.500 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:46.500 "strip_size_kb": 0, 00:11:46.500 "state": "online", 00:11:46.500 "raid_level": "raid1", 00:11:46.500 "superblock": true, 00:11:46.500 "num_base_bdevs": 2, 00:11:46.500 "num_base_bdevs_discovered": 2, 00:11:46.500 "num_base_bdevs_operational": 2, 00:11:46.500 "base_bdevs_list": [ 00:11:46.500 { 00:11:46.500 "name": "spare", 00:11:46.500 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:46.500 "is_configured": true, 00:11:46.500 "data_offset": 2048, 00:11:46.500 "data_size": 63488 00:11:46.500 }, 00:11:46.500 { 00:11:46.500 "name": "BaseBdev2", 00:11:46.500 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:46.500 "is_configured": true, 00:11:46.500 "data_offset": 2048, 00:11:46.500 "data_size": 63488 00:11:46.500 } 00:11:46.500 ] 00:11:46.500 }' 00:11:46.500 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.500 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.758 [2024-12-13 04:27:46.660668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.758 [2024-12-13 04:27:46.660705] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.758 [2024-12-13 04:27:46.660828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.758 [2024-12-13 04:27:46.660909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.758 [2024-12-13 04:27:46.660924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:46.758 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:47.018 /dev/nbd0 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.018 1+0 records in 00:11:47.018 1+0 records out 00:11:47.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000536344 s, 7.6 MB/s 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:47.018 04:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:47.277 /dev/nbd1 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.277 04:27:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.277 1+0 records in 00:11:47.277 1+0 records out 00:11:47.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321279 s, 12.7 MB/s 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:47.277 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.537 
04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.537 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.797 [2024-12-13 04:27:47.730594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:47.797 [2024-12-13 04:27:47.730666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.797 [2024-12-13 04:27:47.730691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.797 [2024-12-13 04:27:47.730709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.797 [2024-12-13 04:27:47.733269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.797 [2024-12-13 04:27:47.733377] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:47.797 [2024-12-13 04:27:47.733506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:47.797 [2024-12-13 04:27:47.733559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:47.797 [2024-12-13 04:27:47.733696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:11:47.797 spare 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.797 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.057 [2024-12-13 04:27:47.833632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:48.057 [2024-12-13 04:27:47.833664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.057 [2024-12-13 04:27:47.833992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:11:48.057 [2024-12-13 04:27:47.834193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:48.057 [2024-12-13 04:27:47.834208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:48.057 [2024-12-13 04:27:47.834400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.057 "name": "raid_bdev1", 00:11:48.057 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:48.057 "strip_size_kb": 0, 00:11:48.057 "state": "online", 00:11:48.057 "raid_level": "raid1", 00:11:48.057 "superblock": true, 00:11:48.057 "num_base_bdevs": 2, 00:11:48.057 "num_base_bdevs_discovered": 2, 00:11:48.057 "num_base_bdevs_operational": 2, 00:11:48.057 "base_bdevs_list": [ 00:11:48.057 { 00:11:48.057 "name": "spare", 00:11:48.057 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:48.057 "is_configured": true, 00:11:48.057 "data_offset": 2048, 00:11:48.057 "data_size": 63488 00:11:48.057 }, 00:11:48.057 { 00:11:48.057 "name": "BaseBdev2", 00:11:48.057 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:48.057 "is_configured": true, 00:11:48.057 "data_offset": 2048, 00:11:48.057 "data_size": 63488 00:11:48.057 } 00:11:48.057 ] 00:11:48.057 }' 00:11:48.057 04:27:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.057 04:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.317 "name": "raid_bdev1", 00:11:48.317 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:48.317 "strip_size_kb": 0, 00:11:48.317 "state": "online", 00:11:48.317 "raid_level": "raid1", 00:11:48.317 "superblock": true, 00:11:48.317 "num_base_bdevs": 2, 00:11:48.317 "num_base_bdevs_discovered": 2, 00:11:48.317 "num_base_bdevs_operational": 2, 00:11:48.317 "base_bdevs_list": [ 00:11:48.317 { 00:11:48.317 "name": "spare", 00:11:48.317 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:48.317 "is_configured": true, 00:11:48.317 "data_offset": 2048, 00:11:48.317 "data_size": 63488 00:11:48.317 }, 
00:11:48.317 { 00:11:48.317 "name": "BaseBdev2", 00:11:48.317 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:48.317 "is_configured": true, 00:11:48.317 "data_offset": 2048, 00:11:48.317 "data_size": 63488 00:11:48.317 } 00:11:48.317 ] 00:11:48.317 }' 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.317 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.577 [2024-12-13 04:27:48.429496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.577 "name": "raid_bdev1", 00:11:48.577 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:48.577 "strip_size_kb": 0, 00:11:48.577 "state": "online", 00:11:48.577 "raid_level": "raid1", 00:11:48.577 "superblock": true, 00:11:48.577 "num_base_bdevs": 2, 00:11:48.577 "num_base_bdevs_discovered": 1, 00:11:48.577 "num_base_bdevs_operational": 
1, 00:11:48.577 "base_bdevs_list": [ 00:11:48.577 { 00:11:48.577 "name": null, 00:11:48.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.577 "is_configured": false, 00:11:48.577 "data_offset": 0, 00:11:48.577 "data_size": 63488 00:11:48.577 }, 00:11:48.577 { 00:11:48.577 "name": "BaseBdev2", 00:11:48.577 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:48.577 "is_configured": true, 00:11:48.577 "data_offset": 2048, 00:11:48.577 "data_size": 63488 00:11:48.577 } 00:11:48.577 ] 00:11:48.577 }' 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.577 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.147 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:49.147 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.147 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.147 [2024-12-13 04:27:48.864725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.147 [2024-12-13 04:27:48.865019] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:49.147 [2024-12-13 04:27:48.865093] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:49.147 [2024-12-13 04:27:48.865173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.147 [2024-12-13 04:27:48.873824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:11:49.147 04:27:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.147 04:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:49.147 [2024-12-13 04:27:48.876094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.085 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.085 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.085 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.085 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.085 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.086 "name": "raid_bdev1", 00:11:50.086 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:50.086 "strip_size_kb": 0, 00:11:50.086 "state": "online", 00:11:50.086 "raid_level": "raid1", 
00:11:50.086 "superblock": true, 00:11:50.086 "num_base_bdevs": 2, 00:11:50.086 "num_base_bdevs_discovered": 2, 00:11:50.086 "num_base_bdevs_operational": 2, 00:11:50.086 "process": { 00:11:50.086 "type": "rebuild", 00:11:50.086 "target": "spare", 00:11:50.086 "progress": { 00:11:50.086 "blocks": 20480, 00:11:50.086 "percent": 32 00:11:50.086 } 00:11:50.086 }, 00:11:50.086 "base_bdevs_list": [ 00:11:50.086 { 00:11:50.086 "name": "spare", 00:11:50.086 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:50.086 "is_configured": true, 00:11:50.086 "data_offset": 2048, 00:11:50.086 "data_size": 63488 00:11:50.086 }, 00:11:50.086 { 00:11:50.086 "name": "BaseBdev2", 00:11:50.086 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:50.086 "is_configured": true, 00:11:50.086 "data_offset": 2048, 00:11:50.086 "data_size": 63488 00:11:50.086 } 00:11:50.086 ] 00:11:50.086 }' 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.086 04:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.086 [2024-12-13 04:27:50.040652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.086 [2024-12-13 04:27:50.083876] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:50.086 [2024-12-13 04:27:50.084027] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:50.086 [2024-12-13 04:27:50.084074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.086 [2024-12-13 04:27:50.084116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.086 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.345 "name": "raid_bdev1", 00:11:50.345 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:50.345 "strip_size_kb": 0, 00:11:50.345 "state": "online", 00:11:50.345 "raid_level": "raid1", 00:11:50.345 "superblock": true, 00:11:50.345 "num_base_bdevs": 2, 00:11:50.345 "num_base_bdevs_discovered": 1, 00:11:50.345 "num_base_bdevs_operational": 1, 00:11:50.345 "base_bdevs_list": [ 00:11:50.345 { 00:11:50.345 "name": null, 00:11:50.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.345 "is_configured": false, 00:11:50.345 "data_offset": 0, 00:11:50.345 "data_size": 63488 00:11:50.345 }, 00:11:50.345 { 00:11:50.345 "name": "BaseBdev2", 00:11:50.345 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:50.345 "is_configured": true, 00:11:50.345 "data_offset": 2048, 00:11:50.345 "data_size": 63488 00:11:50.345 } 00:11:50.345 ] 00:11:50.345 }' 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.345 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.604 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:50.604 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.604 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.604 [2024-12-13 04:27:50.576035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:50.604 [2024-12-13 04:27:50.576117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.604 [2024-12-13 04:27:50.576149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:50.604 [2024-12-13 04:27:50.576160] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.604 [2024-12-13 04:27:50.576850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.605 [2024-12-13 04:27:50.576918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:50.605 [2024-12-13 04:27:50.577066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:50.605 [2024-12-13 04:27:50.577114] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:50.605 [2024-12-13 04:27:50.577179] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:50.605 [2024-12-13 04:27:50.577253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.605 [2024-12-13 04:27:50.585954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:50.605 spare 00:11:50.605 04:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.605 04:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:50.605 [2024-12-13 04:27:50.588167] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.985 "name": "raid_bdev1", 00:11:51.985 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:51.985 "strip_size_kb": 0, 00:11:51.985 "state": "online", 00:11:51.985 "raid_level": "raid1", 00:11:51.985 "superblock": true, 00:11:51.985 "num_base_bdevs": 2, 00:11:51.985 "num_base_bdevs_discovered": 2, 00:11:51.985 "num_base_bdevs_operational": 2, 00:11:51.985 "process": { 00:11:51.985 "type": "rebuild", 00:11:51.985 "target": "spare", 00:11:51.985 "progress": { 00:11:51.985 "blocks": 20480, 00:11:51.985 "percent": 32 00:11:51.985 } 00:11:51.985 }, 00:11:51.985 "base_bdevs_list": [ 00:11:51.985 { 00:11:51.985 "name": "spare", 00:11:51.985 "uuid": "fa8f30d3-997b-57a4-8106-5d9b17d9ae54", 00:11:51.985 "is_configured": true, 00:11:51.985 "data_offset": 2048, 00:11:51.985 "data_size": 63488 00:11:51.985 }, 00:11:51.985 { 00:11:51.985 "name": "BaseBdev2", 00:11:51.985 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:51.985 "is_configured": true, 00:11:51.985 "data_offset": 2048, 00:11:51.985 "data_size": 63488 00:11:51.985 } 00:11:51.985 ] 00:11:51.985 }' 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.985 
04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.985 [2024-12-13 04:27:51.720654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:51.985 [2024-12-13 04:27:51.796282] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:51.985 [2024-12-13 04:27:51.796357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.985 [2024-12-13 04:27:51.796376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:51.985 [2024-12-13 04:27:51.796389] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.985 "name": "raid_bdev1", 00:11:51.985 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:51.985 "strip_size_kb": 0, 00:11:51.985 "state": "online", 00:11:51.985 "raid_level": "raid1", 00:11:51.985 "superblock": true, 00:11:51.985 "num_base_bdevs": 2, 00:11:51.985 "num_base_bdevs_discovered": 1, 00:11:51.985 "num_base_bdevs_operational": 1, 00:11:51.985 "base_bdevs_list": [ 00:11:51.985 { 00:11:51.985 "name": null, 00:11:51.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.985 "is_configured": false, 00:11:51.985 "data_offset": 0, 00:11:51.985 "data_size": 63488 00:11:51.985 }, 00:11:51.985 { 00:11:51.985 "name": "BaseBdev2", 00:11:51.985 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:51.985 "is_configured": true, 00:11:51.985 "data_offset": 2048, 00:11:51.985 "data_size": 63488 00:11:51.985 } 00:11:51.985 ] 00:11:51.985 }' 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.985 04:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.245 04:27:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.245 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.505 "name": "raid_bdev1", 00:11:52.505 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:52.505 "strip_size_kb": 0, 00:11:52.505 "state": "online", 00:11:52.505 "raid_level": "raid1", 00:11:52.505 "superblock": true, 00:11:52.505 "num_base_bdevs": 2, 00:11:52.505 "num_base_bdevs_discovered": 1, 00:11:52.505 "num_base_bdevs_operational": 1, 00:11:52.505 "base_bdevs_list": [ 00:11:52.505 { 00:11:52.505 "name": null, 00:11:52.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.505 "is_configured": false, 00:11:52.505 "data_offset": 0, 00:11:52.505 "data_size": 63488 00:11:52.505 }, 00:11:52.505 { 00:11:52.505 "name": "BaseBdev2", 00:11:52.505 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:52.505 "is_configured": true, 00:11:52.505 "data_offset": 2048, 00:11:52.505 "data_size": 
63488 00:11:52.505 } 00:11:52.505 ] 00:11:52.505 }' 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.505 [2024-12-13 04:27:52.403966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:52.505 [2024-12-13 04:27:52.404094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.505 [2024-12-13 04:27:52.404125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:52.505 [2024-12-13 04:27:52.404140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.505 [2024-12-13 04:27:52.404681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.505 [2024-12-13 04:27:52.404711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:52.505 [2024-12-13 04:27:52.404808] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:52.505 [2024-12-13 04:27:52.404832] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:52.505 [2024-12-13 04:27:52.404844] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:52.505 [2024-12-13 04:27:52.404862] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:52.505 BaseBdev1 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.505 04:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.444 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.702 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.702 "name": "raid_bdev1", 00:11:53.702 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:53.702 "strip_size_kb": 0, 00:11:53.702 "state": "online", 00:11:53.702 "raid_level": "raid1", 00:11:53.702 "superblock": true, 00:11:53.702 "num_base_bdevs": 2, 00:11:53.702 "num_base_bdevs_discovered": 1, 00:11:53.702 "num_base_bdevs_operational": 1, 00:11:53.702 "base_bdevs_list": [ 00:11:53.702 { 00:11:53.702 "name": null, 00:11:53.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.702 "is_configured": false, 00:11:53.702 "data_offset": 0, 00:11:53.702 "data_size": 63488 00:11:53.702 }, 00:11:53.702 { 00:11:53.702 "name": "BaseBdev2", 00:11:53.702 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:53.702 "is_configured": true, 00:11:53.702 "data_offset": 2048, 00:11:53.702 "data_size": 63488 00:11:53.702 } 00:11:53.702 ] 00:11:53.702 }' 00:11:53.702 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.702 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.961 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.961 "name": "raid_bdev1", 00:11:53.961 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:53.961 "strip_size_kb": 0, 00:11:53.961 "state": "online", 00:11:53.961 "raid_level": "raid1", 00:11:53.961 "superblock": true, 00:11:53.961 "num_base_bdevs": 2, 00:11:53.961 "num_base_bdevs_discovered": 1, 00:11:53.961 "num_base_bdevs_operational": 1, 00:11:53.961 "base_bdevs_list": [ 00:11:53.961 { 00:11:53.961 "name": null, 00:11:53.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.962 "is_configured": false, 00:11:53.962 "data_offset": 0, 00:11:53.962 "data_size": 63488 00:11:53.962 }, 00:11:53.962 { 00:11:53.962 "name": "BaseBdev2", 00:11:53.962 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:53.962 "is_configured": true, 00:11:53.962 "data_offset": 2048, 00:11:53.962 "data_size": 63488 00:11:53.962 } 00:11:53.962 ] 00:11:53.962 }' 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.962 04:27:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.962 [2024-12-13 04:27:53.965418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.962 [2024-12-13 04:27:53.965657] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:53.962 [2024-12-13 04:27:53.965672] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:53.962 request: 00:11:53.962 { 00:11:53.962 "base_bdev": "BaseBdev1", 00:11:53.962 "raid_bdev": "raid_bdev1", 00:11:53.962 "method": 
"bdev_raid_add_base_bdev", 00:11:53.962 "req_id": 1 00:11:53.962 } 00:11:53.962 Got JSON-RPC error response 00:11:53.962 response: 00:11:53.962 { 00:11:53.962 "code": -22, 00:11:53.962 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:53.962 } 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.962 04:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.342 04:27:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.342 04:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.342 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.342 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.342 "name": "raid_bdev1", 00:11:55.342 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:55.342 "strip_size_kb": 0, 00:11:55.342 "state": "online", 00:11:55.342 "raid_level": "raid1", 00:11:55.342 "superblock": true, 00:11:55.342 "num_base_bdevs": 2, 00:11:55.342 "num_base_bdevs_discovered": 1, 00:11:55.342 "num_base_bdevs_operational": 1, 00:11:55.342 "base_bdevs_list": [ 00:11:55.342 { 00:11:55.342 "name": null, 00:11:55.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.342 "is_configured": false, 00:11:55.342 "data_offset": 0, 00:11:55.342 "data_size": 63488 00:11:55.342 }, 00:11:55.342 { 00:11:55.342 "name": "BaseBdev2", 00:11:55.342 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:55.342 "is_configured": true, 00:11:55.342 "data_offset": 2048, 00:11:55.342 "data_size": 63488 00:11:55.342 } 00:11:55.342 ] 00:11:55.342 }' 00:11:55.342 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.342 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.602 "name": "raid_bdev1", 00:11:55.602 "uuid": "24ca36b6-5230-4dbd-af25-0e1687da624b", 00:11:55.602 "strip_size_kb": 0, 00:11:55.602 "state": "online", 00:11:55.602 "raid_level": "raid1", 00:11:55.602 "superblock": true, 00:11:55.602 "num_base_bdevs": 2, 00:11:55.602 "num_base_bdevs_discovered": 1, 00:11:55.602 "num_base_bdevs_operational": 1, 00:11:55.602 "base_bdevs_list": [ 00:11:55.602 { 00:11:55.602 "name": null, 00:11:55.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.602 "is_configured": false, 00:11:55.602 "data_offset": 0, 00:11:55.602 "data_size": 63488 00:11:55.602 }, 00:11:55.602 { 00:11:55.602 "name": "BaseBdev2", 00:11:55.602 "uuid": "c3174816-0c12-535a-8a7a-b0028fee9ebe", 00:11:55.602 "is_configured": true, 00:11:55.602 "data_offset": 2048, 00:11:55.602 "data_size": 63488 00:11:55.602 } 00:11:55.602 ] 00:11:55.602 }' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88130 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88130 ']' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88130 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88130 00:11:55.602 killing process with pid 88130 00:11:55.602 Received shutdown signal, test time was about 60.000000 seconds 00:11:55.602 00:11:55.602 Latency(us) 00:11:55.602 [2024-12-13T04:27:55.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.602 [2024-12-13T04:27:55.617Z] =================================================================================================================== 00:11:55.602 [2024-12-13T04:27:55.617Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88130' 00:11:55.602 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88130 00:11:55.602 [2024-12-13 04:27:55.605732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.602 04:27:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88130 00:11:55.602 [2024-12-13 04:27:55.605893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.602 [2024-12-13 04:27:55.605963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.602 [2024-12-13 04:27:55.605974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:55.862 [2024-12-13 04:27:55.666691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.122 04:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:56.122 ************************************ 00:11:56.122 END TEST raid_rebuild_test_sb 00:11:56.122 ************************************ 00:11:56.122 00:11:56.122 real 0m21.394s 00:11:56.122 user 0m26.048s 00:11:56.122 sys 0m3.907s 00:11:56.122 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.122 04:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.122 04:27:56 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:56.122 04:27:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:56.122 04:27:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.122 04:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.122 ************************************ 00:11:56.122 START TEST raid_rebuild_test_io 00:11:56.122 ************************************ 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:56.122 
04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88839 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88839 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 88839 ']' 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.122 04:27:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.382 [2024-12-13 04:27:56.161810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:56.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:56.382 Zero copy mechanism will not be used. 
00:11:56.382 [2024-12-13 04:27:56.162315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88839 ] 00:11:56.382 [2024-12-13 04:27:56.295627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.382 [2024-12-13 04:27:56.333721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.641 [2024-12-13 04:27:56.409608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.641 [2024-12-13 04:27:56.409648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 BaseBdev1_malloc 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 [2024-12-13 04:27:57.038682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:57.211 [2024-12-13 04:27:57.038762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.211 [2024-12-13 04:27:57.038796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:57.211 [2024-12-13 04:27:57.038810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.211 [2024-12-13 04:27:57.041273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.211 [2024-12-13 04:27:57.041330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.211 BaseBdev1 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 BaseBdev2_malloc 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 [2024-12-13 04:27:57.073316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:57.211 [2024-12-13 04:27:57.073494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.211 [2024-12-13 04:27:57.073528] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.211 [2024-12-13 04:27:57.073539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.211 [2024-12-13 04:27:57.075990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.211 [2024-12-13 04:27:57.076036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.211 BaseBdev2 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 spare_malloc 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 spare_delay 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 [2024-12-13 04:27:57.119783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:57.211 [2024-12-13 04:27:57.119840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.211 [2024-12-13 04:27:57.119862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.211 [2024-12-13 04:27:57.119872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.211 [2024-12-13 04:27:57.122330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.211 [2024-12-13 04:27:57.122372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.211 spare 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 [2024-12-13 04:27:57.131811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.211 [2024-12-13 04:27:57.133976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.211 [2024-12-13 04:27:57.134082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:57.211 [2024-12-13 04:27:57.134094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:57.211 [2024-12-13 04:27:57.134425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:57.211 [2024-12-13 04:27:57.134609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:57.211 [2024-12-13 04:27:57.134625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001200 00:11:57.211 [2024-12-13 04:27:57.134759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.211 
"name": "raid_bdev1", 00:11:57.211 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:11:57.211 "strip_size_kb": 0, 00:11:57.211 "state": "online", 00:11:57.211 "raid_level": "raid1", 00:11:57.211 "superblock": false, 00:11:57.211 "num_base_bdevs": 2, 00:11:57.211 "num_base_bdevs_discovered": 2, 00:11:57.211 "num_base_bdevs_operational": 2, 00:11:57.211 "base_bdevs_list": [ 00:11:57.211 { 00:11:57.211 "name": "BaseBdev1", 00:11:57.211 "uuid": "06411643-7f26-5641-afa0-395b673cb9e9", 00:11:57.211 "is_configured": true, 00:11:57.211 "data_offset": 0, 00:11:57.211 "data_size": 65536 00:11:57.211 }, 00:11:57.211 { 00:11:57.211 "name": "BaseBdev2", 00:11:57.211 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:11:57.211 "is_configured": true, 00:11:57.211 "data_offset": 0, 00:11:57.211 "data_size": 65536 00:11:57.211 } 00:11:57.211 ] 00:11:57.211 }' 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.211 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 [2024-12-13 04:27:57.595249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 [2024-12-13 04:27:57.682841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.780 04:27:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.780 "name": "raid_bdev1", 00:11:57.780 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:11:57.780 "strip_size_kb": 0, 00:11:57.780 "state": "online", 00:11:57.780 "raid_level": "raid1", 00:11:57.780 "superblock": false, 00:11:57.780 "num_base_bdevs": 2, 00:11:57.780 "num_base_bdevs_discovered": 1, 00:11:57.780 "num_base_bdevs_operational": 1, 00:11:57.780 "base_bdevs_list": [ 00:11:57.780 { 00:11:57.780 "name": null, 00:11:57.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.780 "is_configured": false, 00:11:57.780 "data_offset": 0, 00:11:57.780 "data_size": 65536 00:11:57.780 }, 00:11:57.780 { 00:11:57.780 "name": "BaseBdev2", 00:11:57.780 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:11:57.780 "is_configured": true, 00:11:57.780 "data_offset": 0, 00:11:57.780 "data_size": 65536 00:11:57.780 } 00:11:57.780 ] 00:11:57.780 }' 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:57.780 04:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 [2024-12-13 04:27:57.779728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:57.780 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:57.780 Zero copy mechanism will not be used. 00:11:57.780 Running I/O for 60 seconds... 00:11:58.349 04:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:58.349 04:27:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.349 04:27:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.349 [2024-12-13 04:27:58.114649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:58.349 04:27:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.349 04:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:58.349 [2024-12-13 04:27:58.159385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:58.349 [2024-12-13 04:27:58.161837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.349 [2024-12-13 04:27:58.269591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:58.349 [2024-12-13 04:27:58.270434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:58.609 [2024-12-13 04:27:58.486465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:58.609 [2024-12-13 04:27:58.486940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:59.127 186.00 IOPS, 558.00 MiB/s 
[2024-12-13T04:27:59.142Z] [2024-12-13 04:27:58.949271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:59.385 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.385 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.386 "name": "raid_bdev1", 00:11:59.386 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:11:59.386 "strip_size_kb": 0, 00:11:59.386 "state": "online", 00:11:59.386 "raid_level": "raid1", 00:11:59.386 "superblock": false, 00:11:59.386 "num_base_bdevs": 2, 00:11:59.386 "num_base_bdevs_discovered": 2, 00:11:59.386 "num_base_bdevs_operational": 2, 00:11:59.386 "process": { 00:11:59.386 "type": "rebuild", 00:11:59.386 "target": "spare", 00:11:59.386 "progress": { 00:11:59.386 "blocks": 10240, 00:11:59.386 "percent": 15 00:11:59.386 } 00:11:59.386 }, 00:11:59.386 "base_bdevs_list": [ 00:11:59.386 
{ 00:11:59.386 "name": "spare", 00:11:59.386 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:11:59.386 "is_configured": true, 00:11:59.386 "data_offset": 0, 00:11:59.386 "data_size": 65536 00:11:59.386 }, 00:11:59.386 { 00:11:59.386 "name": "BaseBdev2", 00:11:59.386 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:11:59.386 "is_configured": true, 00:11:59.386 "data_offset": 0, 00:11:59.386 "data_size": 65536 00:11:59.386 } 00:11:59.386 ] 00:11:59.386 }' 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.386 [2024-12-13 04:27:59.299583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.386 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.386 [2024-12-13 04:27:59.326865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.645 [2024-12-13 04:27:59.409466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:59.645 [2024-12-13 04:27:59.519676] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:59.645 [2024-12-13 04:27:59.526954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.645 [2024-12-13 04:27:59.526994] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.645 [2024-12-13 04:27:59.527011] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:59.645 [2024-12-13 04:27:59.543777] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.645 "name": "raid_bdev1", 00:11:59.645 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:11:59.645 "strip_size_kb": 0, 00:11:59.645 "state": "online", 00:11:59.645 "raid_level": "raid1", 00:11:59.645 "superblock": false, 00:11:59.645 "num_base_bdevs": 2, 00:11:59.645 "num_base_bdevs_discovered": 1, 00:11:59.645 "num_base_bdevs_operational": 1, 00:11:59.645 "base_bdevs_list": [ 00:11:59.645 { 00:11:59.645 "name": null, 00:11:59.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.645 "is_configured": false, 00:11:59.645 "data_offset": 0, 00:11:59.645 "data_size": 65536 00:11:59.645 }, 00:11:59.645 { 00:11:59.645 "name": "BaseBdev2", 00:11:59.645 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:11:59.645 "is_configured": true, 00:11:59.645 "data_offset": 0, 00:11:59.645 "data_size": 65536 00:11:59.645 } 00:11:59.645 ] 00:11:59.645 }' 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.645 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.164 157.50 IOPS, 472.50 MiB/s [2024-12-13T04:28:00.179Z] 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.164 04:27:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.164 "name": "raid_bdev1", 00:12:00.164 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:00.164 "strip_size_kb": 0, 00:12:00.164 "state": "online", 00:12:00.164 "raid_level": "raid1", 00:12:00.164 "superblock": false, 00:12:00.164 "num_base_bdevs": 2, 00:12:00.164 "num_base_bdevs_discovered": 1, 00:12:00.164 "num_base_bdevs_operational": 1, 00:12:00.164 "base_bdevs_list": [ 00:12:00.164 { 00:12:00.164 "name": null, 00:12:00.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.164 "is_configured": false, 00:12:00.164 "data_offset": 0, 00:12:00.164 "data_size": 65536 00:12:00.164 }, 00:12:00.164 { 00:12:00.164 "name": "BaseBdev2", 00:12:00.164 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:00.164 "is_configured": true, 00:12:00.164 "data_offset": 0, 00:12:00.164 "data_size": 65536 00:12:00.164 } 00:12:00.164 ] 00:12:00.164 }' 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.164 04:28:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.164 [2024-12-13 04:28:00.120088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.164 04:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:00.164 [2024-12-13 04:28:00.162955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:00.165 [2024-12-13 04:28:00.165334] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.424 [2024-12-13 04:28:00.277558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:00.424 [2024-12-13 04:28:00.278306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:00.683 [2024-12-13 04:28:00.498502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:00.683 [2024-12-13 04:28:00.498982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:00.943 174.33 IOPS, 523.00 MiB/s [2024-12-13T04:28:00.958Z] [2024-12-13 04:28:00.822922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:01.202 [2024-12-13 04:28:01.032373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.202 "name": "raid_bdev1", 00:12:01.202 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:01.202 "strip_size_kb": 0, 00:12:01.202 "state": "online", 00:12:01.202 "raid_level": "raid1", 00:12:01.202 "superblock": false, 00:12:01.202 "num_base_bdevs": 2, 00:12:01.202 "num_base_bdevs_discovered": 2, 00:12:01.202 "num_base_bdevs_operational": 2, 00:12:01.202 "process": { 00:12:01.202 "type": "rebuild", 00:12:01.202 "target": "spare", 00:12:01.202 "progress": { 00:12:01.202 "blocks": 10240, 00:12:01.202 "percent": 15 00:12:01.202 } 00:12:01.202 }, 00:12:01.202 "base_bdevs_list": [ 00:12:01.202 { 00:12:01.202 "name": "spare", 00:12:01.202 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:01.202 "is_configured": true, 00:12:01.202 "data_offset": 0, 00:12:01.202 "data_size": 65536 00:12:01.202 }, 00:12:01.202 { 00:12:01.202 "name": "BaseBdev2", 00:12:01.202 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:01.202 "is_configured": true, 00:12:01.202 "data_offset": 0, 00:12:01.202 
"data_size": 65536 00:12:01.202 } 00:12:01.202 ] 00:12:01.202 }' 00:12:01.202 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=330 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.462 04:28:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.462 "name": "raid_bdev1", 00:12:01.462 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:01.462 "strip_size_kb": 0, 00:12:01.462 "state": "online", 00:12:01.462 "raid_level": "raid1", 00:12:01.462 "superblock": false, 00:12:01.462 "num_base_bdevs": 2, 00:12:01.462 "num_base_bdevs_discovered": 2, 00:12:01.462 "num_base_bdevs_operational": 2, 00:12:01.462 "process": { 00:12:01.462 "type": "rebuild", 00:12:01.462 "target": "spare", 00:12:01.462 "progress": { 00:12:01.462 "blocks": 12288, 00:12:01.462 "percent": 18 00:12:01.462 } 00:12:01.462 }, 00:12:01.462 "base_bdevs_list": [ 00:12:01.462 { 00:12:01.462 "name": "spare", 00:12:01.462 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:01.462 "is_configured": true, 00:12:01.462 "data_offset": 0, 00:12:01.462 "data_size": 65536 00:12:01.462 }, 00:12:01.462 { 00:12:01.462 "name": "BaseBdev2", 00:12:01.462 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:01.462 "is_configured": true, 00:12:01.462 "data_offset": 0, 00:12:01.462 "data_size": 65536 00:12:01.462 } 00:12:01.462 ] 00:12:01.462 }' 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.462 [2024-12-13 04:28:01.369344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.462 04:28:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.462 04:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:01.722 [2024-12-13 04:28:01.478941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:01.982 149.50 IOPS, 448.50 MiB/s [2024-12-13T04:28:01.997Z] [2024-12-13 04:28:01.908749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:02.551 [2024-12-13 04:28:02.362658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:02.551 [2024-12-13 04:28:02.363046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.551 "name": "raid_bdev1", 00:12:02.551 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:02.551 "strip_size_kb": 0, 00:12:02.551 "state": "online", 00:12:02.551 "raid_level": "raid1", 00:12:02.551 "superblock": false, 00:12:02.551 "num_base_bdevs": 2, 00:12:02.551 "num_base_bdevs_discovered": 2, 00:12:02.551 "num_base_bdevs_operational": 2, 00:12:02.551 "process": { 00:12:02.551 "type": "rebuild", 00:12:02.551 "target": "spare", 00:12:02.551 "progress": { 00:12:02.551 "blocks": 28672, 00:12:02.551 "percent": 43 00:12:02.551 } 00:12:02.551 }, 00:12:02.551 "base_bdevs_list": [ 00:12:02.551 { 00:12:02.551 "name": "spare", 00:12:02.551 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:02.551 "is_configured": true, 00:12:02.551 "data_offset": 0, 00:12:02.551 "data_size": 65536 00:12:02.551 }, 00:12:02.551 { 00:12:02.551 "name": "BaseBdev2", 00:12:02.551 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:02.551 "is_configured": true, 00:12:02.551 "data_offset": 0, 00:12:02.551 "data_size": 65536 00:12:02.551 } 00:12:02.551 ] 00:12:02.551 }' 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.551 04:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:02.811 [2024-12-13 04:28:02.682493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 
00:12:03.380 131.80 IOPS, 395.40 MiB/s [2024-12-13T04:28:03.395Z] [2024-12-13 04:28:03.189802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:03.380 [2024-12-13 04:28:03.190341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.639 "name": "raid_bdev1", 00:12:03.639 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:03.639 "strip_size_kb": 0, 00:12:03.639 "state": "online", 00:12:03.639 "raid_level": "raid1", 00:12:03.639 "superblock": false, 00:12:03.639 "num_base_bdevs": 2, 00:12:03.639 
"num_base_bdevs_discovered": 2, 00:12:03.639 "num_base_bdevs_operational": 2, 00:12:03.639 "process": { 00:12:03.639 "type": "rebuild", 00:12:03.639 "target": "spare", 00:12:03.639 "progress": { 00:12:03.639 "blocks": 45056, 00:12:03.639 "percent": 68 00:12:03.639 } 00:12:03.639 }, 00:12:03.639 "base_bdevs_list": [ 00:12:03.639 { 00:12:03.639 "name": "spare", 00:12:03.639 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:03.639 "is_configured": true, 00:12:03.639 "data_offset": 0, 00:12:03.639 "data_size": 65536 00:12:03.639 }, 00:12:03.639 { 00:12:03.639 "name": "BaseBdev2", 00:12:03.639 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:03.639 "is_configured": true, 00:12:03.639 "data_offset": 0, 00:12:03.639 "data_size": 65536 00:12:03.639 } 00:12:03.639 ] 00:12:03.639 }' 00:12:03.639 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.899 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.899 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.899 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.899 04:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:03.899 120.17 IOPS, 360.50 MiB/s [2024-12-13T04:28:03.914Z] [2024-12-13 04:28:03.842379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:04.837 [2024-12-13 04:28:04.620670] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.837 [2024-12-13 04:28:04.720509] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.837 [2024-12-13 04:28:04.722990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.837 "name": "raid_bdev1", 00:12:04.837 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:04.837 "strip_size_kb": 0, 00:12:04.837 "state": "online", 00:12:04.837 "raid_level": "raid1", 00:12:04.837 "superblock": false, 00:12:04.837 "num_base_bdevs": 2, 00:12:04.837 "num_base_bdevs_discovered": 2, 00:12:04.837 "num_base_bdevs_operational": 2, 00:12:04.837 "base_bdevs_list": [ 00:12:04.837 { 00:12:04.837 "name": "spare", 00:12:04.837 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:04.837 "is_configured": true, 00:12:04.837 "data_offset": 0, 00:12:04.837 "data_size": 65536 00:12:04.837 }, 00:12:04.837 { 00:12:04.837 "name": "BaseBdev2", 00:12:04.837 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 
00:12:04.837 "is_configured": true, 00:12:04.837 "data_offset": 0, 00:12:04.837 "data_size": 65536 00:12:04.837 } 00:12:04.837 ] 00:12:04.837 }' 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.837 108.57 IOPS, 325.71 MiB/s [2024-12-13T04:28:04.852Z] 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:04.837 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.098 "name": "raid_bdev1", 00:12:05.098 
"uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:05.098 "strip_size_kb": 0, 00:12:05.098 "state": "online", 00:12:05.098 "raid_level": "raid1", 00:12:05.098 "superblock": false, 00:12:05.098 "num_base_bdevs": 2, 00:12:05.098 "num_base_bdevs_discovered": 2, 00:12:05.098 "num_base_bdevs_operational": 2, 00:12:05.098 "base_bdevs_list": [ 00:12:05.098 { 00:12:05.098 "name": "spare", 00:12:05.098 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:05.098 "is_configured": true, 00:12:05.098 "data_offset": 0, 00:12:05.098 "data_size": 65536 00:12:05.098 }, 00:12:05.098 { 00:12:05.098 "name": "BaseBdev2", 00:12:05.098 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:05.098 "is_configured": true, 00:12:05.098 "data_offset": 0, 00:12:05.098 "data_size": 65536 00:12:05.098 } 00:12:05.098 ] 00:12:05.098 }' 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.098 04:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.098 04:28:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.098 "name": "raid_bdev1", 00:12:05.098 "uuid": "8ae9851d-82bf-4f38-905c-aba9b397c68c", 00:12:05.098 "strip_size_kb": 0, 00:12:05.098 "state": "online", 00:12:05.098 "raid_level": "raid1", 00:12:05.098 "superblock": false, 00:12:05.098 "num_base_bdevs": 2, 00:12:05.098 "num_base_bdevs_discovered": 2, 00:12:05.098 "num_base_bdevs_operational": 2, 00:12:05.098 "base_bdevs_list": [ 00:12:05.098 { 00:12:05.098 "name": "spare", 00:12:05.098 "uuid": "faaae1c7-a3de-5a46-84f4-e1fe2474fe58", 00:12:05.098 "is_configured": true, 00:12:05.098 "data_offset": 0, 00:12:05.098 "data_size": 65536 00:12:05.098 }, 00:12:05.098 { 00:12:05.098 "name": "BaseBdev2", 00:12:05.098 "uuid": "89fccd4a-2f20-5c3d-9d38-1d8ae032fd53", 00:12:05.098 "is_configured": true, 00:12:05.098 "data_offset": 0, 00:12:05.098 "data_size": 65536 00:12:05.098 } 00:12:05.098 ] 00:12:05.098 }' 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:05.098 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.668 [2024-12-13 04:28:05.441216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.668 [2024-12-13 04:28:05.441355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.668 00:12:05.668 Latency(us) 00:12:05.668 [2024-12-13T04:28:05.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.668 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:05.668 raid_bdev1 : 7.71 100.97 302.90 0.00 0.00 13534.35 280.82 114473.36 00:12:05.668 [2024-12-13T04:28:05.683Z] =================================================================================================================== 00:12:05.668 [2024-12-13T04:28:05.683Z] Total : 100.97 302.90 0.00 0.00 13534.35 280.82 114473.36 00:12:05.668 [2024-12-13 04:28:05.476811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.668 [2024-12-13 04:28:05.476938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.668 [2024-12-13 04:28:05.477050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.668 [2024-12-13 04:28:05.477122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:05.668 { 00:12:05.668 "results": [ 00:12:05.668 { 00:12:05.668 "job": "raid_bdev1", 00:12:05.668 "core_mask": "0x1", 00:12:05.668 "workload": "randrw", 00:12:05.668 "percentage": 50, 
00:12:05.668 "status": "finished", 00:12:05.668 "queue_depth": 2, 00:12:05.668 "io_size": 3145728, 00:12:05.668 "runtime": 7.705566, 00:12:05.668 "iops": 100.96597706125677, 00:12:05.668 "mibps": 302.8979311837703, 00:12:05.668 "io_failed": 0, 00:12:05.668 "io_timeout": 0, 00:12:05.668 "avg_latency_us": 13534.349654808544, 00:12:05.668 "min_latency_us": 280.8174672489083, 00:12:05.668 "max_latency_us": 114473.36244541485 00:12:05.668 } 00:12:05.668 ], 00:12:05.668 "core_count": 1 00:12:05.668 } 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:05.668 
04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:05.668 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:05.937 /dev/nbd0 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.937 1+0 records in 00:12:05.937 1+0 records out 00:12:05.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346441 s, 11.8 MB/s 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:05.937 04:28:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:06.209 /dev/nbd1 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.209 1+0 records in 00:12:06.209 1+0 records out 00:12:06.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515898 s, 7.9 MB/s 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.209 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.477 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88839 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 88839 ']' 00:12:06.737 04:28:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 88839 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88839 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.737 killing process with pid 88839 00:12:06.737 Received shutdown signal, test time was about 8.825019 seconds 00:12:06.737 00:12:06.737 Latency(us) 00:12:06.737 [2024-12-13T04:28:06.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.737 [2024-12-13T04:28:06.752Z] =================================================================================================================== 00:12:06.737 [2024-12-13T04:28:06.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.737 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88839' 00:12:06.738 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 88839 00:12:06.738 [2024-12-13 04:28:06.590343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.738 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 88839 00:12:06.738 [2024-12-13 04:28:06.640117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.998 04:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:06.998 ************************************ 00:12:06.998 END TEST raid_rebuild_test_io 00:12:06.998 ************************************ 00:12:06.998 00:12:06.998 real 0m10.895s 00:12:06.998 user 0m13.865s 
00:12:06.998 sys 0m1.532s 00:12:06.998 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.998 04:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.258 04:28:07 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:07.258 04:28:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:07.258 04:28:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.258 04:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.258 ************************************ 00:12:07.258 START TEST raid_rebuild_test_sb_io 00:12:07.258 ************************************ 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.258 04:28:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89205 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89205 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89205 ']' 
00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.258 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.258 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:07.258 Zero copy mechanism will not be used. 00:12:07.258 [2024-12-13 04:28:07.133403] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:07.258 [2024-12-13 04:28:07.133538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89205 ] 00:12:07.518 [2024-12-13 04:28:07.287165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.518 [2024-12-13 04:28:07.325609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.518 [2024-12-13 04:28:07.401343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.518 [2024-12-13 04:28:07.401493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:08.089 04:28:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 BaseBdev1_malloc 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 [2024-12-13 04:28:07.987579] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:08.089 [2024-12-13 04:28:07.987658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.089 [2024-12-13 04:28:07.987690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:08.089 [2024-12-13 04:28:07.987705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.089 [2024-12-13 04:28:07.990175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.089 [2024-12-13 04:28:07.990218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.089 BaseBdev1 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 BaseBdev2_malloc 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 [2024-12-13 04:28:08.022210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:08.089 [2024-12-13 04:28:08.022273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.089 [2024-12-13 04:28:08.022301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:08.089 [2024-12-13 04:28:08.022311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.089 [2024-12-13 04:28:08.024726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.089 [2024-12-13 04:28:08.024772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.089 BaseBdev2 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:08.089 spare_malloc 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 spare_delay 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 [2024-12-13 04:28:08.068719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:08.089 [2024-12-13 04:28:08.068772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.089 [2024-12-13 04:28:08.068795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:08.089 [2024-12-13 04:28:08.068805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.089 [2024-12-13 04:28:08.071259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.089 [2024-12-13 04:28:08.071299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:08.089 spare 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 [2024-12-13 04:28:08.080743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.089 [2024-12-13 04:28:08.082870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.089 [2024-12-13 04:28:08.083214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:08.089 [2024-12-13 04:28:08.083234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.089 [2024-12-13 04:28:08.083551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:08.089 [2024-12-13 04:28:08.083720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:08.089 [2024-12-13 04:28:08.083741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:08.089 [2024-12-13 04:28:08.083901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.089 04:28:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.350 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.350 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.350 "name": "raid_bdev1", 00:12:08.350 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:08.350 "strip_size_kb": 0, 00:12:08.350 "state": "online", 00:12:08.350 "raid_level": "raid1", 00:12:08.350 "superblock": true, 00:12:08.350 "num_base_bdevs": 2, 00:12:08.350 "num_base_bdevs_discovered": 2, 00:12:08.350 "num_base_bdevs_operational": 2, 00:12:08.350 "base_bdevs_list": [ 00:12:08.350 { 00:12:08.350 "name": "BaseBdev1", 00:12:08.350 "uuid": "d7359a01-4d7b-5afc-8854-299089d4c321", 00:12:08.350 "is_configured": true, 00:12:08.350 "data_offset": 2048, 00:12:08.350 "data_size": 63488 00:12:08.350 }, 00:12:08.350 { 00:12:08.350 "name": "BaseBdev2", 00:12:08.350 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:08.350 "is_configured": true, 00:12:08.350 "data_offset": 2048, 
00:12:08.350 "data_size": 63488 00:12:08.350 } 00:12:08.350 ] 00:12:08.350 }' 00:12:08.350 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.350 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:08.611 [2024-12-13 04:28:08.464801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:08.611 04:28:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:08.611 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 [2024-12-13 04:28:08.564420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.612 04:28:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.612 "name": "raid_bdev1", 00:12:08.612 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:08.612 "strip_size_kb": 0, 00:12:08.612 "state": "online", 00:12:08.612 "raid_level": "raid1", 00:12:08.612 "superblock": true, 00:12:08.612 "num_base_bdevs": 2, 00:12:08.612 "num_base_bdevs_discovered": 1, 00:12:08.612 "num_base_bdevs_operational": 1, 00:12:08.612 "base_bdevs_list": [ 00:12:08.612 { 00:12:08.612 "name": null, 00:12:08.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.612 "is_configured": false, 00:12:08.612 "data_offset": 0, 00:12:08.612 "data_size": 63488 00:12:08.612 }, 00:12:08.612 { 00:12:08.612 "name": "BaseBdev2", 00:12:08.612 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:08.612 "is_configured": true, 00:12:08.612 "data_offset": 2048, 00:12:08.612 "data_size": 63488 00:12:08.612 } 00:12:08.612 ] 00:12:08.612 }' 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.612 04:28:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.872 [2024-12-13 04:28:08.664966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:08.872 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:08.872 Zero copy mechanism will not be used. 00:12:08.872 Running I/O for 60 seconds... 
00:12:09.132 04:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.132 04:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.132 04:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.132 [2024-12-13 04:28:09.014253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.132 04:28:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.132 04:28:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:09.132 [2024-12-13 04:28:09.071853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:09.132 [2024-12-13 04:28:09.074274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:09.392 [2024-12-13 04:28:09.187026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.392 [2024-12-13 04:28:09.187850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.392 [2024-12-13 04:28:09.402542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.392 [2024-12-13 04:28:09.402830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.961 183.00 IOPS, 549.00 MiB/s [2024-12-13T04:28:09.976Z] [2024-12-13 04:28:09.714677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:09.961 [2024-12-13 04:28:09.715403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:09.961 [2024-12-13 04:28:09.933072] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:10.220 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.220 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.220 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.220 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.221 "name": "raid_bdev1", 00:12:10.221 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:10.221 "strip_size_kb": 0, 00:12:10.221 "state": "online", 00:12:10.221 "raid_level": "raid1", 00:12:10.221 "superblock": true, 00:12:10.221 "num_base_bdevs": 2, 00:12:10.221 "num_base_bdevs_discovered": 2, 00:12:10.221 "num_base_bdevs_operational": 2, 00:12:10.221 "process": { 00:12:10.221 "type": "rebuild", 00:12:10.221 "target": "spare", 00:12:10.221 "progress": { 00:12:10.221 "blocks": 10240, 00:12:10.221 "percent": 16 00:12:10.221 } 00:12:10.221 }, 00:12:10.221 "base_bdevs_list": [ 00:12:10.221 { 00:12:10.221 "name": "spare", 
00:12:10.221 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:10.221 "is_configured": true, 00:12:10.221 "data_offset": 2048, 00:12:10.221 "data_size": 63488 00:12:10.221 }, 00:12:10.221 { 00:12:10.221 "name": "BaseBdev2", 00:12:10.221 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:10.221 "is_configured": true, 00:12:10.221 "data_offset": 2048, 00:12:10.221 "data_size": 63488 00:12:10.221 } 00:12:10.221 ] 00:12:10.221 }' 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.221 [2024-12-13 04:28:10.189281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.221 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.221 [2024-12-13 04:28:10.215618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.481 [2024-12-13 04:28:10.393086] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:10.481 [2024-12-13 04:28:10.401755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.481 [2024-12-13 04:28:10.401896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.481 [2024-12-13 04:28:10.401947] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:12:10.481 [2024-12-13 04:28:10.424344] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.481 04:28:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.481 "name": "raid_bdev1", 00:12:10.481 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:10.481 "strip_size_kb": 0, 00:12:10.481 "state": "online", 00:12:10.481 "raid_level": "raid1", 00:12:10.481 "superblock": true, 00:12:10.481 "num_base_bdevs": 2, 00:12:10.481 "num_base_bdevs_discovered": 1, 00:12:10.481 "num_base_bdevs_operational": 1, 00:12:10.481 "base_bdevs_list": [ 00:12:10.481 { 00:12:10.481 "name": null, 00:12:10.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.481 "is_configured": false, 00:12:10.481 "data_offset": 0, 00:12:10.481 "data_size": 63488 00:12:10.481 }, 00:12:10.481 { 00:12:10.481 "name": "BaseBdev2", 00:12:10.481 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:10.481 "is_configured": true, 00:12:10.481 "data_offset": 2048, 00:12:10.481 "data_size": 63488 00:12:10.481 } 00:12:10.481 ] 00:12:10.481 }' 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.481 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 153.00 IOPS, 459.00 MiB/s [2024-12-13T04:28:11.016Z] 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.001 "name": "raid_bdev1", 00:12:11.001 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:11.001 "strip_size_kb": 0, 00:12:11.001 "state": "online", 00:12:11.001 "raid_level": "raid1", 00:12:11.001 "superblock": true, 00:12:11.001 "num_base_bdevs": 2, 00:12:11.001 "num_base_bdevs_discovered": 1, 00:12:11.001 "num_base_bdevs_operational": 1, 00:12:11.001 "base_bdevs_list": [ 00:12:11.001 { 00:12:11.001 "name": null, 00:12:11.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.001 "is_configured": false, 00:12:11.001 "data_offset": 0, 00:12:11.001 "data_size": 63488 00:12:11.001 }, 00:12:11.001 { 00:12:11.001 "name": "BaseBdev2", 00:12:11.001 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:11.001 "is_configured": true, 00:12:11.001 "data_offset": 2048, 00:12:11.001 "data_size": 63488 00:12:11.001 } 00:12:11.001 ] 00:12:11.001 }' 00:12:11.001 04:28:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.001 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.001 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.261 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.261 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.261 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:11.261 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.261 [2024-12-13 04:28:11.051564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.261 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.261 04:28:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:11.261 [2024-12-13 04:28:11.097266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:11.261 [2024-12-13 04:28:11.099626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.261 [2024-12-13 04:28:11.218581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.261 [2024-12-13 04:28:11.219578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:11.521 [2024-12-13 04:28:11.434232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:11.521 [2024-12-13 04:28:11.434608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:11.781 178.00 IOPS, 534.00 MiB/s [2024-12-13T04:28:11.796Z] [2024-12-13 04:28:11.670115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.041 [2024-12-13 04:28:11.880047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.041 [2024-12-13 04:28:11.880656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.300 [2024-12-13 04:28:12.097973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.300 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.300 "name": "raid_bdev1", 00:12:12.300 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:12.300 "strip_size_kb": 0, 00:12:12.300 "state": "online", 00:12:12.300 "raid_level": "raid1", 00:12:12.300 "superblock": true, 00:12:12.300 "num_base_bdevs": 2, 00:12:12.300 "num_base_bdevs_discovered": 2, 00:12:12.300 "num_base_bdevs_operational": 2, 00:12:12.300 "process": { 00:12:12.300 "type": "rebuild", 00:12:12.300 "target": "spare", 00:12:12.300 "progress": { 00:12:12.300 "blocks": 14336, 00:12:12.300 "percent": 22 00:12:12.301 } 00:12:12.301 }, 00:12:12.301 "base_bdevs_list": [ 00:12:12.301 { 00:12:12.301 "name": "spare", 
00:12:12.301 "is_configured": true, 00:12:12.301 "data_offset": 2048, 00:12:12.301 "data_size": 63488 00:12:12.301 }, 00:12:12.301 { 00:12:12.301 "name": "BaseBdev2", 00:12:12.301 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:12.301 "is_configured": true, 00:12:12.301 "data_offset": 2048, 00:12:12.301 "data_size": 63488 00:12:12.301 } 00:12:12.301 ] 00:12:12.301 }' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:12.301 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=341 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.301 04:28:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.301 "name": "raid_bdev1", 00:12:12.301 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:12.301 "strip_size_kb": 0, 00:12:12.301 "state": "online", 00:12:12.301 "raid_level": "raid1", 00:12:12.301 "superblock": true, 00:12:12.301 "num_base_bdevs": 2, 00:12:12.301 "num_base_bdevs_discovered": 2, 00:12:12.301 "num_base_bdevs_operational": 2, 00:12:12.301 "process": { 00:12:12.301 "type": "rebuild", 00:12:12.301 "target": "spare", 00:12:12.301 "progress": { 00:12:12.301 "blocks": 14336, 00:12:12.301 "percent": 22 00:12:12.301 } 00:12:12.301 }, 00:12:12.301 "base_bdevs_list": [ 00:12:12.301 { 00:12:12.301 "name": "spare", 00:12:12.301 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:12.301 "is_configured": true, 00:12:12.301 "data_offset": 2048, 00:12:12.301 "data_size": 63488 00:12:12.301 }, 00:12:12.301 { 00:12:12.301 "name": "BaseBdev2", 00:12:12.301 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:12.301 "is_configured": true, 00:12:12.301 "data_offset": 2048, 00:12:12.301 "data_size": 
63488 00:12:12.301 } 00:12:12.301 ] 00:12:12.301 }' 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.301 [2024-12-13 04:28:12.312587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:12.301 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.560 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.560 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.560 04:28:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.560 [2024-12-13 04:28:12.538580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:12.820 154.75 IOPS, 464.25 MiB/s [2024-12-13T04:28:12.835Z] [2024-12-13 04:28:12.774212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:12.820 [2024-12-13 04:28:12.774447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:13.390 [2024-12-13 04:28:13.251019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.390 04:28:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.390 "name": "raid_bdev1", 00:12:13.390 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:13.390 "strip_size_kb": 0, 00:12:13.390 "state": "online", 00:12:13.390 "raid_level": "raid1", 00:12:13.390 "superblock": true, 00:12:13.390 "num_base_bdevs": 2, 00:12:13.390 "num_base_bdevs_discovered": 2, 00:12:13.390 "num_base_bdevs_operational": 2, 00:12:13.390 "process": { 00:12:13.390 "type": "rebuild", 00:12:13.390 "target": "spare", 00:12:13.390 "progress": { 00:12:13.390 "blocks": 28672, 00:12:13.390 "percent": 45 00:12:13.390 } 00:12:13.390 }, 00:12:13.390 "base_bdevs_list": [ 00:12:13.390 { 00:12:13.390 "name": "spare", 00:12:13.390 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:13.390 "is_configured": true, 00:12:13.390 "data_offset": 2048, 00:12:13.390 "data_size": 63488 00:12:13.390 }, 00:12:13.390 { 00:12:13.390 "name": "BaseBdev2", 00:12:13.390 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:13.390 "is_configured": true, 00:12:13.390 "data_offset": 2048, 00:12:13.390 "data_size": 63488 00:12:13.390 } 00:12:13.390 ] 00:12:13.390 }' 00:12:13.390 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.649 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.649 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.649 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.649 04:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.649 [2024-12-13 04:28:13.567525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:13.909 132.00 IOPS, 396.00 MiB/s [2024-12-13T04:28:13.924Z] [2024-12-13 04:28:13.680879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:14.169 [2024-12-13 04:28:14.127314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.739 
04:28:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.739 "name": "raid_bdev1", 00:12:14.739 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:14.739 "strip_size_kb": 0, 00:12:14.739 "state": "online", 00:12:14.739 "raid_level": "raid1", 00:12:14.739 "superblock": true, 00:12:14.739 "num_base_bdevs": 2, 00:12:14.739 "num_base_bdevs_discovered": 2, 00:12:14.739 "num_base_bdevs_operational": 2, 00:12:14.739 "process": { 00:12:14.739 "type": "rebuild", 00:12:14.739 "target": "spare", 00:12:14.739 "progress": { 00:12:14.739 "blocks": 45056, 00:12:14.739 "percent": 70 00:12:14.739 } 00:12:14.739 }, 00:12:14.739 "base_bdevs_list": [ 00:12:14.739 { 00:12:14.739 "name": "spare", 00:12:14.739 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:14.739 "is_configured": true, 00:12:14.739 "data_offset": 2048, 00:12:14.739 "data_size": 63488 00:12:14.739 }, 00:12:14.739 { 00:12:14.739 "name": "BaseBdev2", 00:12:14.739 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:14.739 "is_configured": true, 00:12:14.739 "data_offset": 2048, 00:12:14.739 "data_size": 63488 00:12:14.739 } 00:12:14.739 ] 00:12:14.739 }' 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.739 04:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.739 04:28:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.002 119.83 IOPS, 359.50 MiB/s [2024-12-13T04:28:15.017Z] [2024-12-13 04:28:14.801433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:15.002 [2024-12-13 04:28:14.908062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:15.573 [2024-12-13 04:28:15.548825] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.832 [2024-12-13 04:28:15.654162] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:15.832 [2024-12-13 04:28:15.658217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.832 
04:28:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.832 107.00 IOPS, 321.00 MiB/s [2024-12-13T04:28:15.847Z] 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.832 "name": "raid_bdev1", 00:12:15.832 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:15.832 "strip_size_kb": 0, 00:12:15.832 "state": "online", 00:12:15.832 "raid_level": "raid1", 00:12:15.832 "superblock": true, 00:12:15.832 "num_base_bdevs": 2, 00:12:15.832 "num_base_bdevs_discovered": 2, 00:12:15.832 "num_base_bdevs_operational": 2, 00:12:15.832 "process": { 00:12:15.832 "type": "rebuild", 00:12:15.832 "target": "spare", 00:12:15.832 "progress": { 00:12:15.832 "blocks": 63488, 00:12:15.832 "percent": 100 00:12:15.832 } 00:12:15.832 }, 00:12:15.832 "base_bdevs_list": [ 00:12:15.832 { 00:12:15.832 "name": "spare", 00:12:15.832 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:15.832 "is_configured": true, 00:12:15.832 "data_offset": 2048, 00:12:15.832 "data_size": 63488 00:12:15.832 }, 00:12:15.832 { 00:12:15.832 "name": "BaseBdev2", 00:12:15.832 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:15.832 "is_configured": true, 00:12:15.832 "data_offset": 2048, 00:12:15.832 "data_size": 63488 00:12:15.832 } 00:12:15.832 ] 00:12:15.832 }' 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.832 04:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.771 98.38 IOPS, 295.12 MiB/s [2024-12-13T04:28:16.786Z] 04:28:16 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.771 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.771 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.771 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.771 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.771 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.031 "name": "raid_bdev1", 00:12:17.031 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:17.031 "strip_size_kb": 0, 00:12:17.031 "state": "online", 00:12:17.031 "raid_level": "raid1", 00:12:17.031 "superblock": true, 00:12:17.031 "num_base_bdevs": 2, 00:12:17.031 "num_base_bdevs_discovered": 2, 00:12:17.031 "num_base_bdevs_operational": 2, 00:12:17.031 "base_bdevs_list": [ 00:12:17.031 { 00:12:17.031 "name": "spare", 00:12:17.031 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:17.031 "is_configured": true, 00:12:17.031 "data_offset": 2048, 00:12:17.031 "data_size": 63488 00:12:17.031 }, 00:12:17.031 { 00:12:17.031 "name": "BaseBdev2", 00:12:17.031 "uuid": 
"8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:17.031 "is_configured": true, 00:12:17.031 "data_offset": 2048, 00:12:17.031 "data_size": 63488 00:12:17.031 } 00:12:17.031 ] 00:12:17.031 }' 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.031 
"name": "raid_bdev1", 00:12:17.031 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:17.031 "strip_size_kb": 0, 00:12:17.031 "state": "online", 00:12:17.031 "raid_level": "raid1", 00:12:17.031 "superblock": true, 00:12:17.031 "num_base_bdevs": 2, 00:12:17.031 "num_base_bdevs_discovered": 2, 00:12:17.031 "num_base_bdevs_operational": 2, 00:12:17.031 "base_bdevs_list": [ 00:12:17.031 { 00:12:17.031 "name": "spare", 00:12:17.031 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:17.031 "is_configured": true, 00:12:17.031 "data_offset": 2048, 00:12:17.031 "data_size": 63488 00:12:17.031 }, 00:12:17.031 { 00:12:17.031 "name": "BaseBdev2", 00:12:17.031 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:17.031 "is_configured": true, 00:12:17.031 "data_offset": 2048, 00:12:17.031 "data_size": 63488 00:12:17.031 } 00:12:17.031 ] 00:12:17.031 }' 00:12:17.031 04:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.031 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.031 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.290 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.290 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.291 "name": "raid_bdev1", 00:12:17.291 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:17.291 "strip_size_kb": 0, 00:12:17.291 "state": "online", 00:12:17.291 "raid_level": "raid1", 00:12:17.291 "superblock": true, 00:12:17.291 "num_base_bdevs": 2, 00:12:17.291 "num_base_bdevs_discovered": 2, 00:12:17.291 "num_base_bdevs_operational": 2, 00:12:17.291 "base_bdevs_list": [ 00:12:17.291 { 00:12:17.291 "name": "spare", 00:12:17.291 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:17.291 "is_configured": true, 00:12:17.291 "data_offset": 2048, 00:12:17.291 "data_size": 63488 00:12:17.291 }, 00:12:17.291 { 00:12:17.291 "name": "BaseBdev2", 00:12:17.291 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:17.291 "is_configured": true, 00:12:17.291 "data_offset": 2048, 00:12:17.291 "data_size": 63488 00:12:17.291 } 00:12:17.291 ] 00:12:17.291 }' 
00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.291 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.550 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.550 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.550 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.550 [2024-12-13 04:28:17.490033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.550 [2024-12-13 04:28:17.490181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.810 00:12:17.810 Latency(us) 00:12:17.810 [2024-12-13T04:28:17.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.810 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:17.810 raid_bdev1 : 8.94 92.73 278.20 0.00 0.00 14674.04 286.18 113099.68 00:12:17.810 [2024-12-13T04:28:17.825Z] =================================================================================================================== 00:12:17.810 [2024-12-13T04:28:17.825Z] Total : 92.73 278.20 0.00 0.00 14674.04 286.18 113099.68 00:12:17.810 [2024-12-13 04:28:17.593785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.810 [2024-12-13 04:28:17.593922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.810 [2024-12-13 04:28:17.594042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.810 [2024-12-13 04:28:17.594122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:17.810 { 00:12:17.810 "results": [ 00:12:17.810 { 00:12:17.810 "job": "raid_bdev1", 
00:12:17.810 "core_mask": "0x1", 00:12:17.810 "workload": "randrw", 00:12:17.810 "percentage": 50, 00:12:17.810 "status": "finished", 00:12:17.810 "queue_depth": 2, 00:12:17.810 "io_size": 3145728, 00:12:17.810 "runtime": 8.939484, 00:12:17.810 "iops": 92.73465895794433, 00:12:17.810 "mibps": 278.20397687383297, 00:12:17.810 "io_failed": 0, 00:12:17.810 "io_timeout": 0, 00:12:17.810 "avg_latency_us": 14674.041202901375, 00:12:17.810 "min_latency_us": 286.1834061135371, 00:12:17.810 "max_latency_us": 113099.68209606987 00:12:17.810 } 00:12:17.810 ], 00:12:17.810 "core_count": 1 00:12:17.810 } 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
local bdev_list 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.810 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:18.070 /dev/nbd0 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:18.070 1+0 records in 00:12:18.070 1+0 records out 00:12:18.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253229 s, 16.2 MB/s 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:18.070 04:28:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.070 04:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:18.331 /dev/nbd1 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.331 1+0 records in 00:12:18.331 1+0 records out 00:12:18.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295608 s, 13.9 MB/s 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.331 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.591 
04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.591 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:18.851 04:28:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.851 [2024-12-13 04:28:18.680245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.851 [2024-12-13 04:28:18.680400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.851 [2024-12-13 04:28:18.680471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:18.851 [2024-12-13 04:28:18.680516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.851 [2024-12-13 04:28:18.683092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.851 [2024-12-13 04:28:18.683200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.851 [2024-12-13 04:28:18.683338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:18.851 [2024-12-13 04:28:18.683417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:18.851 [2024-12-13 04:28:18.683624] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.851 spare 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.851 [2024-12-13 04:28:18.783588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:18.851 [2024-12-13 04:28:18.783674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.851 [2024-12-13 04:28:18.784086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:12:18.851 [2024-12-13 04:28:18.784335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:18.851 [2024-12-13 04:28:18.784395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:18.851 [2024-12-13 04:28:18.784680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.851 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.852 "name": "raid_bdev1", 00:12:18.852 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:18.852 "strip_size_kb": 0, 00:12:18.852 "state": "online", 00:12:18.852 "raid_level": "raid1", 00:12:18.852 "superblock": true, 00:12:18.852 "num_base_bdevs": 2, 00:12:18.852 "num_base_bdevs_discovered": 2, 00:12:18.852 "num_base_bdevs_operational": 2, 00:12:18.852 "base_bdevs_list": [ 00:12:18.852 { 00:12:18.852 "name": "spare", 00:12:18.852 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:18.852 "is_configured": true, 00:12:18.852 "data_offset": 2048, 00:12:18.852 "data_size": 63488 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "name": "BaseBdev2", 00:12:18.852 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:18.852 "is_configured": true, 00:12:18.852 
"data_offset": 2048, 00:12:18.852 "data_size": 63488 00:12:18.852 } 00:12:18.852 ] 00:12:18.852 }' 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.852 04:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.421 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.421 "name": "raid_bdev1", 00:12:19.421 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:19.421 "strip_size_kb": 0, 00:12:19.421 "state": "online", 00:12:19.421 "raid_level": "raid1", 00:12:19.421 "superblock": true, 00:12:19.422 "num_base_bdevs": 2, 00:12:19.422 "num_base_bdevs_discovered": 2, 00:12:19.422 "num_base_bdevs_operational": 2, 00:12:19.422 "base_bdevs_list": [ 00:12:19.422 { 00:12:19.422 "name": "spare", 00:12:19.422 "uuid": 
"c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:19.422 "is_configured": true, 00:12:19.422 "data_offset": 2048, 00:12:19.422 "data_size": 63488 00:12:19.422 }, 00:12:19.422 { 00:12:19.422 "name": "BaseBdev2", 00:12:19.422 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:19.422 "is_configured": true, 00:12:19.422 "data_offset": 2048, 00:12:19.422 "data_size": 63488 00:12:19.422 } 00:12:19.422 ] 00:12:19.422 }' 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.422 [2024-12-13 04:28:19.412570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.422 
04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.422 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.681 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.681 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.681 "name": "raid_bdev1", 00:12:19.681 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 
00:12:19.681 "strip_size_kb": 0, 00:12:19.681 "state": "online", 00:12:19.681 "raid_level": "raid1", 00:12:19.681 "superblock": true, 00:12:19.681 "num_base_bdevs": 2, 00:12:19.681 "num_base_bdevs_discovered": 1, 00:12:19.681 "num_base_bdevs_operational": 1, 00:12:19.681 "base_bdevs_list": [ 00:12:19.681 { 00:12:19.681 "name": null, 00:12:19.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.681 "is_configured": false, 00:12:19.681 "data_offset": 0, 00:12:19.681 "data_size": 63488 00:12:19.681 }, 00:12:19.681 { 00:12:19.681 "name": "BaseBdev2", 00:12:19.681 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:19.681 "is_configured": true, 00:12:19.681 "data_offset": 2048, 00:12:19.681 "data_size": 63488 00:12:19.681 } 00:12:19.681 ] 00:12:19.681 }' 00:12:19.681 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.681 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.941 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:19.941 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.941 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.941 [2024-12-13 04:28:19.812636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.941 [2024-12-13 04:28:19.812957] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:19.941 [2024-12-13 04:28:19.813030] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:19.941 [2024-12-13 04:28:19.813106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.941 [2024-12-13 04:28:19.822554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:12:19.941 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.941 04:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:19.941 [2024-12-13 04:28:19.824902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.880 "name": "raid_bdev1", 00:12:20.880 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:20.880 "strip_size_kb": 0, 00:12:20.880 "state": "online", 
00:12:20.880 "raid_level": "raid1", 00:12:20.880 "superblock": true, 00:12:20.880 "num_base_bdevs": 2, 00:12:20.880 "num_base_bdevs_discovered": 2, 00:12:20.880 "num_base_bdevs_operational": 2, 00:12:20.880 "process": { 00:12:20.880 "type": "rebuild", 00:12:20.880 "target": "spare", 00:12:20.880 "progress": { 00:12:20.880 "blocks": 20480, 00:12:20.880 "percent": 32 00:12:20.880 } 00:12:20.880 }, 00:12:20.880 "base_bdevs_list": [ 00:12:20.880 { 00:12:20.880 "name": "spare", 00:12:20.880 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:20.880 "is_configured": true, 00:12:20.880 "data_offset": 2048, 00:12:20.880 "data_size": 63488 00:12:20.880 }, 00:12:20.880 { 00:12:20.880 "name": "BaseBdev2", 00:12:20.880 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:20.880 "is_configured": true, 00:12:20.880 "data_offset": 2048, 00:12:20.880 "data_size": 63488 00:12:20.880 } 00:12:20.880 ] 00:12:20.880 }' 00:12:20.880 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.158 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.158 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.158 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.158 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:21.158 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.158 04:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.158 [2024-12-13 04:28:20.985367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.158 [2024-12-13 04:28:21.033020] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:21.158 [2024-12-13 
04:28:21.033105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.158 [2024-12-13 04:28:21.033128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.158 [2024-12-13 04:28:21.033138] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.158 "name": "raid_bdev1", 00:12:21.158 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:21.158 "strip_size_kb": 0, 00:12:21.158 "state": "online", 00:12:21.158 "raid_level": "raid1", 00:12:21.158 "superblock": true, 00:12:21.158 "num_base_bdevs": 2, 00:12:21.158 "num_base_bdevs_discovered": 1, 00:12:21.158 "num_base_bdevs_operational": 1, 00:12:21.158 "base_bdevs_list": [ 00:12:21.158 { 00:12:21.158 "name": null, 00:12:21.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.158 "is_configured": false, 00:12:21.158 "data_offset": 0, 00:12:21.158 "data_size": 63488 00:12:21.158 }, 00:12:21.158 { 00:12:21.158 "name": "BaseBdev2", 00:12:21.158 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:21.158 "is_configured": true, 00:12:21.158 "data_offset": 2048, 00:12:21.158 "data_size": 63488 00:12:21.158 } 00:12:21.158 ] 00:12:21.158 }' 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.158 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.768 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:21.768 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.768 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.768 [2024-12-13 04:28:21.469492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:21.768 [2024-12-13 04:28:21.469641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.768 [2024-12-13 04:28:21.469695] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a280 00:12:21.768 [2024-12-13 04:28:21.469733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.768 [2024-12-13 04:28:21.470346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.768 [2024-12-13 04:28:21.470426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:21.768 [2024-12-13 04:28:21.470611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:21.768 [2024-12-13 04:28:21.470662] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:21.768 [2024-12-13 04:28:21.470737] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:21.768 [2024-12-13 04:28:21.470810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.768 [2024-12-13 04:28:21.480094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:12:21.768 spare 00:12:21.768 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.768 04:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:21.769 [2024-12-13 04:28:21.482410] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.708 "name": "raid_bdev1", 00:12:22.708 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:22.708 "strip_size_kb": 0, 00:12:22.708 "state": "online", 00:12:22.708 "raid_level": "raid1", 00:12:22.708 "superblock": true, 00:12:22.708 "num_base_bdevs": 2, 00:12:22.708 "num_base_bdevs_discovered": 2, 00:12:22.708 "num_base_bdevs_operational": 2, 00:12:22.708 "process": { 00:12:22.708 "type": "rebuild", 00:12:22.708 "target": "spare", 00:12:22.708 "progress": { 00:12:22.708 "blocks": 20480, 00:12:22.708 "percent": 32 00:12:22.708 } 00:12:22.708 }, 00:12:22.708 "base_bdevs_list": [ 00:12:22.708 { 00:12:22.708 "name": "spare", 00:12:22.708 "uuid": "c7785d71-c2ac-53c6-936d-1309959145e7", 00:12:22.708 "is_configured": true, 00:12:22.708 "data_offset": 2048, 00:12:22.708 "data_size": 63488 00:12:22.708 }, 00:12:22.708 { 00:12:22.708 "name": "BaseBdev2", 00:12:22.708 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:22.708 "is_configured": true, 00:12:22.708 "data_offset": 2048, 00:12:22.708 "data_size": 63488 00:12:22.708 } 00:12:22.708 ] 00:12:22.708 }' 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.708 [2024-12-13 04:28:22.622907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.708 [2024-12-13 04:28:22.690705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:22.708 [2024-12-13 04:28:22.690839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.708 [2024-12-13 04:28:22.690881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.708 [2024-12-13 04:28:22.690925] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.708 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.709 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.968 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.968 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.968 "name": "raid_bdev1", 00:12:22.968 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:22.968 "strip_size_kb": 0, 00:12:22.968 "state": "online", 00:12:22.968 "raid_level": "raid1", 00:12:22.968 "superblock": true, 00:12:22.968 "num_base_bdevs": 2, 00:12:22.968 "num_base_bdevs_discovered": 1, 00:12:22.968 "num_base_bdevs_operational": 1, 00:12:22.968 "base_bdevs_list": [ 00:12:22.968 { 00:12:22.968 "name": null, 00:12:22.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.968 "is_configured": false, 00:12:22.968 "data_offset": 0, 00:12:22.968 "data_size": 63488 00:12:22.968 }, 00:12:22.968 { 00:12:22.968 "name": "BaseBdev2", 00:12:22.968 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:22.968 "is_configured": true, 00:12:22.968 "data_offset": 2048, 00:12:22.968 "data_size": 63488 00:12:22.968 } 00:12:22.968 ] 00:12:22.968 }' 
00:12:22.968 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.968 04:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.488 "name": "raid_bdev1", 00:12:23.488 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:23.488 "strip_size_kb": 0, 00:12:23.488 "state": "online", 00:12:23.488 "raid_level": "raid1", 00:12:23.488 "superblock": true, 00:12:23.488 "num_base_bdevs": 2, 00:12:23.488 "num_base_bdevs_discovered": 1, 00:12:23.488 "num_base_bdevs_operational": 1, 00:12:23.488 "base_bdevs_list": [ 00:12:23.488 { 00:12:23.488 "name": null, 00:12:23.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.488 "is_configured": false, 00:12:23.488 "data_offset": 0, 
00:12:23.488 "data_size": 63488 00:12:23.488 }, 00:12:23.488 { 00:12:23.488 "name": "BaseBdev2", 00:12:23.488 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:23.488 "is_configured": true, 00:12:23.488 "data_offset": 2048, 00:12:23.488 "data_size": 63488 00:12:23.488 } 00:12:23.488 ] 00:12:23.488 }' 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.488 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.488 [2024-12-13 04:28:23.374295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:23.488 [2024-12-13 04:28:23.374366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.488 [2024-12-13 04:28:23.374391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:23.488 [2024-12-13 04:28:23.374406] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.488 [2024-12-13 04:28:23.374926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.488 [2024-12-13 04:28:23.374952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.488 [2024-12-13 04:28:23.375037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:23.488 [2024-12-13 04:28:23.375056] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:23.489 [2024-12-13 04:28:23.375065] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:23.489 [2024-12-13 04:28:23.375084] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:23.489 BaseBdev1 00:12:23.489 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.489 04:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.426 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.426 "name": "raid_bdev1", 00:12:24.426 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:24.426 "strip_size_kb": 0, 00:12:24.426 "state": "online", 00:12:24.426 "raid_level": "raid1", 00:12:24.426 "superblock": true, 00:12:24.426 "num_base_bdevs": 2, 00:12:24.426 "num_base_bdevs_discovered": 1, 00:12:24.426 "num_base_bdevs_operational": 1, 00:12:24.426 "base_bdevs_list": [ 00:12:24.426 { 00:12:24.426 "name": null, 00:12:24.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.427 "is_configured": false, 00:12:24.427 "data_offset": 0, 00:12:24.427 "data_size": 63488 00:12:24.427 }, 00:12:24.427 { 00:12:24.427 "name": "BaseBdev2", 00:12:24.427 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:24.427 "is_configured": true, 00:12:24.427 "data_offset": 2048, 00:12:24.427 "data_size": 63488 00:12:24.427 } 00:12:24.427 ] 00:12:24.427 }' 00:12:24.427 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.427 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.996 "name": "raid_bdev1", 00:12:24.996 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:24.996 "strip_size_kb": 0, 00:12:24.996 "state": "online", 00:12:24.996 "raid_level": "raid1", 00:12:24.996 "superblock": true, 00:12:24.996 "num_base_bdevs": 2, 00:12:24.996 "num_base_bdevs_discovered": 1, 00:12:24.996 "num_base_bdevs_operational": 1, 00:12:24.996 "base_bdevs_list": [ 00:12:24.996 { 00:12:24.996 "name": null, 00:12:24.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.996 "is_configured": false, 00:12:24.996 "data_offset": 0, 00:12:24.996 "data_size": 63488 00:12:24.996 }, 00:12:24.996 { 00:12:24.996 "name": "BaseBdev2", 00:12:24.996 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:24.996 "is_configured": true, 
00:12:24.996 "data_offset": 2048, 00:12:24.996 "data_size": 63488 00:12:24.996 } 00:12:24.996 ] 00:12:24.996 }' 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.996 04:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.996 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.259 [2024-12-13 04:28:25.016583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.259 [2024-12-13 04:28:25.016875] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:25.259 [2024-12-13 04:28:25.016907] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:25.259 request: 00:12:25.259 { 00:12:25.259 "base_bdev": "BaseBdev1", 00:12:25.259 "raid_bdev": "raid_bdev1", 00:12:25.259 "method": "bdev_raid_add_base_bdev", 00:12:25.259 "req_id": 1 00:12:25.259 } 00:12:25.259 Got JSON-RPC error response 00:12:25.259 response: 00:12:25.259 { 00:12:25.259 "code": -22, 00:12:25.259 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:25.259 } 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:25.259 04:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.198 "name": "raid_bdev1", 00:12:26.198 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:26.198 "strip_size_kb": 0, 00:12:26.198 "state": "online", 00:12:26.198 "raid_level": "raid1", 00:12:26.198 "superblock": true, 00:12:26.198 "num_base_bdevs": 2, 00:12:26.198 "num_base_bdevs_discovered": 1, 00:12:26.198 "num_base_bdevs_operational": 1, 00:12:26.198 "base_bdevs_list": [ 00:12:26.198 { 00:12:26.198 "name": null, 00:12:26.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.198 "is_configured": false, 00:12:26.198 "data_offset": 0, 00:12:26.198 "data_size": 63488 00:12:26.198 }, 00:12:26.198 { 00:12:26.198 "name": "BaseBdev2", 00:12:26.198 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:26.198 "is_configured": true, 00:12:26.198 "data_offset": 2048, 00:12:26.198 "data_size": 63488 00:12:26.198 } 00:12:26.198 ] 00:12:26.198 }' 
00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.198 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.768 "name": "raid_bdev1", 00:12:26.768 "uuid": "572c7d83-8475-4791-b8ea-d0cf2d9d2cff", 00:12:26.768 "strip_size_kb": 0, 00:12:26.768 "state": "online", 00:12:26.768 "raid_level": "raid1", 00:12:26.768 "superblock": true, 00:12:26.768 "num_base_bdevs": 2, 00:12:26.768 "num_base_bdevs_discovered": 1, 00:12:26.768 "num_base_bdevs_operational": 1, 00:12:26.768 "base_bdevs_list": [ 00:12:26.768 { 00:12:26.768 "name": null, 00:12:26.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.768 "is_configured": false, 00:12:26.768 "data_offset": 0, 
00:12:26.768 "data_size": 63488 00:12:26.768 }, 00:12:26.768 { 00:12:26.768 "name": "BaseBdev2", 00:12:26.768 "uuid": "8d453d86-0fd7-5c15-a9e3-3f91fe59afee", 00:12:26.768 "is_configured": true, 00:12:26.768 "data_offset": 2048, 00:12:26.768 "data_size": 63488 00:12:26.768 } 00:12:26.768 ] 00:12:26.768 }' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89205 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89205 ']' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89205 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89205 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.768 killing process with pid 89205 00:12:26.768 Received shutdown signal, test time was about 18.061773 seconds 00:12:26.768 00:12:26.768 Latency(us) 00:12:26.768 [2024-12-13T04:28:26.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.768 [2024-12-13T04:28:26.783Z] 
=================================================================================================================== 00:12:26.768 [2024-12-13T04:28:26.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89205' 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89205 00:12:26.768 [2024-12-13 04:28:26.694539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.768 04:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89205 00:12:26.768 [2024-12-13 04:28:26.694689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.768 [2024-12-13 04:28:26.694758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.768 [2024-12-13 04:28:26.694768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:26.768 [2024-12-13 04:28:26.743168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:27.337 00:12:27.337 real 0m20.023s 00:12:27.337 user 0m26.040s 00:12:27.337 sys 0m2.431s 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.337 ************************************ 00:12:27.337 END TEST raid_rebuild_test_sb_io 00:12:27.337 ************************************ 00:12:27.337 04:28:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:27.337 04:28:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:27.337 04:28:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 
7 -le 1 ']' 00:12:27.337 04:28:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.337 04:28:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.337 ************************************ 00:12:27.337 START TEST raid_rebuild_test 00:12:27.337 ************************************ 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:27.337 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=89902 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 89902 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 89902 ']' 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.338 04:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.338 [2024-12-13 04:28:27.239747] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:27.338 [2024-12-13 04:28:27.239941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:27.338 Zero copy mechanism will not be used. 00:12:27.338 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89902 ] 00:12:27.597 [2024-12-13 04:28:27.394636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.597 [2024-12-13 04:28:27.433304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.597 [2024-12-13 04:28:27.509609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.597 [2024-12-13 04:28:27.509654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.167 BaseBdev1_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.167 [2024-12-13 04:28:28.091950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:28.167 [2024-12-13 04:28:28.092033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.167 [2024-12-13 04:28:28.092069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:28.167 [2024-12-13 04:28:28.092089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.167 [2024-12-13 04:28:28.094518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.167 [2024-12-13 04:28:28.094551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.167 BaseBdev1 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.167 BaseBdev2_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.167 [2024-12-13 04:28:28.126567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:28.167 [2024-12-13 04:28:28.126618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.167 [2024-12-13 04:28:28.126645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:28.167 [2024-12-13 04:28:28.126654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.167 [2024-12-13 04:28:28.129059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.167 [2024-12-13 04:28:28.129102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:28.167 BaseBdev2 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.167 BaseBdev3_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.167 04:28:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.167 [2024-12-13 04:28:28.161506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:28.167 [2024-12-13 04:28:28.161633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.167 [2024-12-13 04:28:28.161682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:28.167 [2024-12-13 04:28:28.161721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.167 [2024-12-13 04:28:28.164132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.167 [2024-12-13 04:28:28.164202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:28.167 BaseBdev3 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.167 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.427 BaseBdev4_malloc 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.427 [2024-12-13 04:28:28.212549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:28.427 [2024-12-13 04:28:28.212617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.427 [2024-12-13 04:28:28.212656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:28.427 [2024-12-13 04:28:28.212670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.427 [2024-12-13 04:28:28.216197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.427 [2024-12-13 04:28:28.216321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:28.427 BaseBdev4 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.427 spare_malloc 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.427 spare_delay 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.427 [2024-12-13 04:28:28.260050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:28.427 [2024-12-13 04:28:28.260095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.427 [2024-12-13 04:28:28.260117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:28.427 [2024-12-13 04:28:28.260126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.427 [2024-12-13 04:28:28.262537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.427 [2024-12-13 04:28:28.262622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:28.427 spare 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.427 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.427 [2024-12-13 04:28:28.272117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.427 [2024-12-13 04:28:28.274260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:28.428 [2024-12-13 04:28:28.274371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.428 [2024-12-13 04:28:28.274424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:28.428 [2024-12-13 
04:28:28.274543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:28.428 [2024-12-13 04:28:28.274554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:28.428 [2024-12-13 04:28:28.274818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:28.428 [2024-12-13 04:28:28.274957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:28.428 [2024-12-13 04:28:28.274970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:28.428 [2024-12-13 04:28:28.275091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.428 "name": "raid_bdev1", 00:12:28.428 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:28.428 "strip_size_kb": 0, 00:12:28.428 "state": "online", 00:12:28.428 "raid_level": "raid1", 00:12:28.428 "superblock": false, 00:12:28.428 "num_base_bdevs": 4, 00:12:28.428 "num_base_bdevs_discovered": 4, 00:12:28.428 "num_base_bdevs_operational": 4, 00:12:28.428 "base_bdevs_list": [ 00:12:28.428 { 00:12:28.428 "name": "BaseBdev1", 00:12:28.428 "uuid": "29e3550b-8c1e-5567-aca3-68f33d96600d", 00:12:28.428 "is_configured": true, 00:12:28.428 "data_offset": 0, 00:12:28.428 "data_size": 65536 00:12:28.428 }, 00:12:28.428 { 00:12:28.428 "name": "BaseBdev2", 00:12:28.428 "uuid": "838d22ad-b118-57d7-a59f-6bc8c0870054", 00:12:28.428 "is_configured": true, 00:12:28.428 "data_offset": 0, 00:12:28.428 "data_size": 65536 00:12:28.428 }, 00:12:28.428 { 00:12:28.428 "name": "BaseBdev3", 00:12:28.428 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:28.428 "is_configured": true, 00:12:28.428 "data_offset": 0, 00:12:28.428 "data_size": 65536 00:12:28.428 }, 00:12:28.428 { 00:12:28.428 "name": "BaseBdev4", 00:12:28.428 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:28.428 "is_configured": true, 00:12:28.428 "data_offset": 0, 00:12:28.428 "data_size": 65536 00:12:28.428 } 00:12:28.428 ] 00:12:28.428 }' 00:12:28.428 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.428 04:28:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.996 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.996 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.996 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.996 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:28.996 [2024-12-13 04:28:28.715751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.996 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.996 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.997 04:28:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:28.997 [2024-12-13 04:28:28.990993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:28.997 /dev/nbd0 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # (( i <= 20 )) 00:12:29.255 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.256 1+0 records in 00:12:29.256 1+0 records out 00:12:29.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370195 s, 11.1 MB/s 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:29.256 04:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:34.531 65536+0 records in 00:12:34.531 65536+0 records out 00:12:34.531 33554432 bytes (34 MB, 32 MiB) copied, 5.45876 s, 6.1 MB/s 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.531 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.791 [2024-12-13 04:28:34.720988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.791 [2024-12-13 04:28:34.741042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.791 04:28:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.791 "name": "raid_bdev1", 00:12:34.791 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:34.791 "strip_size_kb": 0, 00:12:34.791 "state": "online", 00:12:34.791 "raid_level": "raid1", 00:12:34.791 "superblock": false, 00:12:34.791 "num_base_bdevs": 4, 00:12:34.791 "num_base_bdevs_discovered": 3, 00:12:34.791 "num_base_bdevs_operational": 3, 00:12:34.791 "base_bdevs_list": [ 00:12:34.791 { 00:12:34.791 "name": null, 00:12:34.791 
"uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.791 "is_configured": false, 00:12:34.791 "data_offset": 0, 00:12:34.791 "data_size": 65536 00:12:34.791 }, 00:12:34.791 { 00:12:34.791 "name": "BaseBdev2", 00:12:34.791 "uuid": "838d22ad-b118-57d7-a59f-6bc8c0870054", 00:12:34.791 "is_configured": true, 00:12:34.791 "data_offset": 0, 00:12:34.791 "data_size": 65536 00:12:34.791 }, 00:12:34.791 { 00:12:34.791 "name": "BaseBdev3", 00:12:34.791 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:34.791 "is_configured": true, 00:12:34.791 "data_offset": 0, 00:12:34.791 "data_size": 65536 00:12:34.791 }, 00:12:34.791 { 00:12:34.791 "name": "BaseBdev4", 00:12:34.791 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:34.791 "is_configured": true, 00:12:34.791 "data_offset": 0, 00:12:34.791 "data_size": 65536 00:12:34.791 } 00:12:34.791 ] 00:12:34.791 }' 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.791 04:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.368 04:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.368 04:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.368 04:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.368 [2024-12-13 04:28:35.224318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.368 [2024-12-13 04:28:35.231716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:12:35.368 04:28:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.368 04:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:35.368 [2024-12-13 04:28:35.234046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.309 04:28:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.309 "name": "raid_bdev1", 00:12:36.309 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:36.309 "strip_size_kb": 0, 00:12:36.309 "state": "online", 00:12:36.309 "raid_level": "raid1", 00:12:36.309 "superblock": false, 00:12:36.309 "num_base_bdevs": 4, 00:12:36.309 "num_base_bdevs_discovered": 4, 00:12:36.309 "num_base_bdevs_operational": 4, 00:12:36.309 "process": { 00:12:36.309 "type": "rebuild", 00:12:36.309 "target": "spare", 00:12:36.309 "progress": { 00:12:36.309 "blocks": 20480, 00:12:36.309 "percent": 31 00:12:36.309 } 00:12:36.309 }, 00:12:36.309 "base_bdevs_list": [ 00:12:36.309 { 00:12:36.309 "name": "spare", 00:12:36.309 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:36.309 "is_configured": true, 00:12:36.309 "data_offset": 0, 00:12:36.309 "data_size": 65536 00:12:36.309 }, 00:12:36.309 { 
00:12:36.309 "name": "BaseBdev2", 00:12:36.309 "uuid": "838d22ad-b118-57d7-a59f-6bc8c0870054", 00:12:36.309 "is_configured": true, 00:12:36.309 "data_offset": 0, 00:12:36.309 "data_size": 65536 00:12:36.309 }, 00:12:36.309 { 00:12:36.309 "name": "BaseBdev3", 00:12:36.309 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:36.309 "is_configured": true, 00:12:36.309 "data_offset": 0, 00:12:36.309 "data_size": 65536 00:12:36.309 }, 00:12:36.309 { 00:12:36.309 "name": "BaseBdev4", 00:12:36.309 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:36.309 "is_configured": true, 00:12:36.309 "data_offset": 0, 00:12:36.309 "data_size": 65536 00:12:36.309 } 00:12:36.309 ] 00:12:36.309 }' 00:12:36.309 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.569 [2024-12-13 04:28:36.373888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.569 [2024-12-13 04:28:36.442579] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.569 [2024-12-13 04:28:36.442648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.569 [2024-12-13 04:28:36.442671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.569 [2024-12-13 04:28:36.442680] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.569 "name": "raid_bdev1", 00:12:36.569 "uuid": 
"72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:36.569 "strip_size_kb": 0, 00:12:36.569 "state": "online", 00:12:36.569 "raid_level": "raid1", 00:12:36.569 "superblock": false, 00:12:36.569 "num_base_bdevs": 4, 00:12:36.569 "num_base_bdevs_discovered": 3, 00:12:36.569 "num_base_bdevs_operational": 3, 00:12:36.569 "base_bdevs_list": [ 00:12:36.569 { 00:12:36.569 "name": null, 00:12:36.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.569 "is_configured": false, 00:12:36.569 "data_offset": 0, 00:12:36.569 "data_size": 65536 00:12:36.569 }, 00:12:36.569 { 00:12:36.569 "name": "BaseBdev2", 00:12:36.569 "uuid": "838d22ad-b118-57d7-a59f-6bc8c0870054", 00:12:36.569 "is_configured": true, 00:12:36.569 "data_offset": 0, 00:12:36.569 "data_size": 65536 00:12:36.569 }, 00:12:36.569 { 00:12:36.569 "name": "BaseBdev3", 00:12:36.569 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:36.569 "is_configured": true, 00:12:36.569 "data_offset": 0, 00:12:36.569 "data_size": 65536 00:12:36.569 }, 00:12:36.569 { 00:12:36.569 "name": "BaseBdev4", 00:12:36.569 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:36.569 "is_configured": true, 00:12:36.569 "data_offset": 0, 00:12:36.569 "data_size": 65536 00:12:36.569 } 00:12:36.569 ] 00:12:36.569 }' 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.569 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.140 "name": "raid_bdev1", 00:12:37.140 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:37.140 "strip_size_kb": 0, 00:12:37.140 "state": "online", 00:12:37.140 "raid_level": "raid1", 00:12:37.140 "superblock": false, 00:12:37.140 "num_base_bdevs": 4, 00:12:37.140 "num_base_bdevs_discovered": 3, 00:12:37.140 "num_base_bdevs_operational": 3, 00:12:37.140 "base_bdevs_list": [ 00:12:37.140 { 00:12:37.140 "name": null, 00:12:37.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.140 "is_configured": false, 00:12:37.140 "data_offset": 0, 00:12:37.140 "data_size": 65536 00:12:37.140 }, 00:12:37.140 { 00:12:37.140 "name": "BaseBdev2", 00:12:37.140 "uuid": "838d22ad-b118-57d7-a59f-6bc8c0870054", 00:12:37.140 "is_configured": true, 00:12:37.140 "data_offset": 0, 00:12:37.140 "data_size": 65536 00:12:37.140 }, 00:12:37.140 { 00:12:37.140 "name": "BaseBdev3", 00:12:37.140 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:37.140 "is_configured": true, 00:12:37.140 "data_offset": 0, 00:12:37.140 "data_size": 65536 00:12:37.140 }, 00:12:37.140 { 00:12:37.140 "name": "BaseBdev4", 00:12:37.140 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:37.140 "is_configured": true, 00:12:37.140 "data_offset": 0, 00:12:37.140 "data_size": 65536 00:12:37.140 } 00:12:37.140 ] 00:12:37.140 }' 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.140 04:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.140 04:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.140 04:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.140 04:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.140 04:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.140 [2024-12-13 04:28:37.013278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.140 [2024-12-13 04:28:37.018991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:12:37.140 04:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.140 04:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:37.140 [2024-12-13 04:28:37.021274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.078 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.078 "name": "raid_bdev1", 00:12:38.078 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:38.078 "strip_size_kb": 0, 00:12:38.078 "state": "online", 00:12:38.078 "raid_level": "raid1", 00:12:38.078 "superblock": false, 00:12:38.078 "num_base_bdevs": 4, 00:12:38.078 "num_base_bdevs_discovered": 4, 00:12:38.078 "num_base_bdevs_operational": 4, 00:12:38.078 "process": { 00:12:38.078 "type": "rebuild", 00:12:38.078 "target": "spare", 00:12:38.078 "progress": { 00:12:38.078 "blocks": 20480, 00:12:38.078 "percent": 31 00:12:38.078 } 00:12:38.079 }, 00:12:38.079 "base_bdevs_list": [ 00:12:38.079 { 00:12:38.079 "name": "spare", 00:12:38.079 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:38.079 "is_configured": true, 00:12:38.079 "data_offset": 0, 00:12:38.079 "data_size": 65536 00:12:38.079 }, 00:12:38.079 { 00:12:38.079 "name": "BaseBdev2", 00:12:38.079 "uuid": "838d22ad-b118-57d7-a59f-6bc8c0870054", 00:12:38.079 "is_configured": true, 00:12:38.079 "data_offset": 0, 00:12:38.079 "data_size": 65536 00:12:38.079 }, 00:12:38.079 { 00:12:38.079 "name": "BaseBdev3", 00:12:38.079 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:38.079 "is_configured": true, 00:12:38.079 "data_offset": 0, 00:12:38.079 "data_size": 65536 00:12:38.079 }, 00:12:38.079 { 00:12:38.079 "name": "BaseBdev4", 00:12:38.079 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:38.079 "is_configured": true, 00:12:38.079 "data_offset": 0, 00:12:38.079 "data_size": 65536 00:12:38.079 } 00:12:38.079 ] 00:12:38.079 }' 
00:12:38.079 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.339 [2024-12-13 04:28:38.185135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.339 [2024-12-13 04:28:38.228838] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.339 04:28:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.339 "name": "raid_bdev1", 00:12:38.339 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:38.339 "strip_size_kb": 0, 00:12:38.339 "state": "online", 00:12:38.339 "raid_level": "raid1", 00:12:38.339 "superblock": false, 00:12:38.339 "num_base_bdevs": 4, 00:12:38.339 "num_base_bdevs_discovered": 3, 00:12:38.339 "num_base_bdevs_operational": 3, 00:12:38.339 "process": { 00:12:38.339 "type": "rebuild", 00:12:38.339 "target": "spare", 00:12:38.339 "progress": { 00:12:38.339 "blocks": 24576, 00:12:38.339 "percent": 37 00:12:38.339 } 00:12:38.339 }, 00:12:38.339 "base_bdevs_list": [ 00:12:38.339 { 00:12:38.339 "name": "spare", 00:12:38.339 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:38.339 "is_configured": true, 00:12:38.339 "data_offset": 0, 00:12:38.339 "data_size": 65536 00:12:38.339 }, 00:12:38.339 { 00:12:38.339 "name": null, 00:12:38.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.339 "is_configured": false, 00:12:38.339 "data_offset": 0, 00:12:38.339 "data_size": 65536 00:12:38.339 }, 00:12:38.339 { 00:12:38.339 "name": 
"BaseBdev3", 00:12:38.339 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:38.339 "is_configured": true, 00:12:38.339 "data_offset": 0, 00:12:38.339 "data_size": 65536 00:12:38.339 }, 00:12:38.339 { 00:12:38.339 "name": "BaseBdev4", 00:12:38.339 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:38.339 "is_configured": true, 00:12:38.339 "data_offset": 0, 00:12:38.339 "data_size": 65536 00:12:38.339 } 00:12:38.339 ] 00:12:38.339 }' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.339 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=367 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.599 04:28:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.599 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.599 "name": "raid_bdev1", 00:12:38.599 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:38.599 "strip_size_kb": 0, 00:12:38.599 "state": "online", 00:12:38.599 "raid_level": "raid1", 00:12:38.599 "superblock": false, 00:12:38.599 "num_base_bdevs": 4, 00:12:38.600 "num_base_bdevs_discovered": 3, 00:12:38.600 "num_base_bdevs_operational": 3, 00:12:38.600 "process": { 00:12:38.600 "type": "rebuild", 00:12:38.600 "target": "spare", 00:12:38.600 "progress": { 00:12:38.600 "blocks": 26624, 00:12:38.600 "percent": 40 00:12:38.600 } 00:12:38.600 }, 00:12:38.600 "base_bdevs_list": [ 00:12:38.600 { 00:12:38.600 "name": "spare", 00:12:38.600 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:38.600 "is_configured": true, 00:12:38.600 "data_offset": 0, 00:12:38.600 "data_size": 65536 00:12:38.600 }, 00:12:38.600 { 00:12:38.600 "name": null, 00:12:38.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.600 "is_configured": false, 00:12:38.600 "data_offset": 0, 00:12:38.600 "data_size": 65536 00:12:38.600 }, 00:12:38.600 { 00:12:38.600 "name": "BaseBdev3", 00:12:38.600 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:38.600 "is_configured": true, 00:12:38.600 "data_offset": 0, 00:12:38.600 "data_size": 65536 00:12:38.600 }, 00:12:38.600 { 00:12:38.600 "name": "BaseBdev4", 00:12:38.600 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:38.600 "is_configured": true, 00:12:38.600 "data_offset": 0, 00:12:38.600 "data_size": 65536 00:12:38.600 } 00:12:38.600 ] 00:12:38.600 }' 00:12:38.600 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.600 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:38.600 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.600 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.600 04:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.540 04:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.800 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.800 "name": "raid_bdev1", 00:12:39.800 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:39.800 "strip_size_kb": 0, 00:12:39.800 "state": "online", 00:12:39.800 "raid_level": "raid1", 00:12:39.800 "superblock": false, 00:12:39.800 "num_base_bdevs": 4, 00:12:39.800 "num_base_bdevs_discovered": 3, 00:12:39.800 "num_base_bdevs_operational": 3, 00:12:39.800 "process": { 
00:12:39.800 "type": "rebuild", 00:12:39.800 "target": "spare", 00:12:39.800 "progress": { 00:12:39.800 "blocks": 49152, 00:12:39.800 "percent": 75 00:12:39.800 } 00:12:39.800 }, 00:12:39.800 "base_bdevs_list": [ 00:12:39.800 { 00:12:39.800 "name": "spare", 00:12:39.800 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:39.800 "is_configured": true, 00:12:39.800 "data_offset": 0, 00:12:39.800 "data_size": 65536 00:12:39.800 }, 00:12:39.800 { 00:12:39.800 "name": null, 00:12:39.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.800 "is_configured": false, 00:12:39.800 "data_offset": 0, 00:12:39.800 "data_size": 65536 00:12:39.800 }, 00:12:39.800 { 00:12:39.800 "name": "BaseBdev3", 00:12:39.800 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:39.800 "is_configured": true, 00:12:39.800 "data_offset": 0, 00:12:39.800 "data_size": 65536 00:12:39.800 }, 00:12:39.800 { 00:12:39.800 "name": "BaseBdev4", 00:12:39.800 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:39.800 "is_configured": true, 00:12:39.800 "data_offset": 0, 00:12:39.800 "data_size": 65536 00:12:39.800 } 00:12:39.800 ] 00:12:39.800 }' 00:12:39.800 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.800 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.800 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.800 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.800 04:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.369 [2024-12-13 04:28:40.241604] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:40.369 [2024-12-13 04:28:40.241697] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:40.369 [2024-12-13 04:28:40.241751] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.939 "name": "raid_bdev1", 00:12:40.939 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:40.939 "strip_size_kb": 0, 00:12:40.939 "state": "online", 00:12:40.939 "raid_level": "raid1", 00:12:40.939 "superblock": false, 00:12:40.939 "num_base_bdevs": 4, 00:12:40.939 "num_base_bdevs_discovered": 3, 00:12:40.939 "num_base_bdevs_operational": 3, 00:12:40.939 "base_bdevs_list": [ 00:12:40.939 { 00:12:40.939 "name": "spare", 00:12:40.939 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:40.939 "is_configured": true, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 }, 00:12:40.939 { 00:12:40.939 "name": null, 
00:12:40.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.939 "is_configured": false, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 }, 00:12:40.939 { 00:12:40.939 "name": "BaseBdev3", 00:12:40.939 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:40.939 "is_configured": true, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 }, 00:12:40.939 { 00:12:40.939 "name": "BaseBdev4", 00:12:40.939 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:40.939 "is_configured": true, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 } 00:12:40.939 ] 00:12:40.939 }' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.939 04:28:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.939 "name": "raid_bdev1", 00:12:40.939 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:40.939 "strip_size_kb": 0, 00:12:40.939 "state": "online", 00:12:40.939 "raid_level": "raid1", 00:12:40.939 "superblock": false, 00:12:40.939 "num_base_bdevs": 4, 00:12:40.939 "num_base_bdevs_discovered": 3, 00:12:40.939 "num_base_bdevs_operational": 3, 00:12:40.939 "base_bdevs_list": [ 00:12:40.939 { 00:12:40.939 "name": "spare", 00:12:40.939 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:40.939 "is_configured": true, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 }, 00:12:40.939 { 00:12:40.939 "name": null, 00:12:40.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.939 "is_configured": false, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 }, 00:12:40.939 { 00:12:40.939 "name": "BaseBdev3", 00:12:40.939 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:40.939 "is_configured": true, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 }, 00:12:40.939 { 00:12:40.939 "name": "BaseBdev4", 00:12:40.939 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:40.939 "is_configured": true, 00:12:40.939 "data_offset": 0, 00:12:40.939 "data_size": 65536 00:12:40.939 } 00:12:40.939 ] 00:12:40.939 }' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.939 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.199 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.199 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.199 "name": "raid_bdev1", 00:12:41.199 "uuid": "72ebf355-45b7-47fc-a958-f86686a53a4e", 00:12:41.199 "strip_size_kb": 0, 00:12:41.199 "state": "online", 
00:12:41.199 "raid_level": "raid1", 00:12:41.199 "superblock": false, 00:12:41.199 "num_base_bdevs": 4, 00:12:41.199 "num_base_bdevs_discovered": 3, 00:12:41.199 "num_base_bdevs_operational": 3, 00:12:41.199 "base_bdevs_list": [ 00:12:41.199 { 00:12:41.199 "name": "spare", 00:12:41.199 "uuid": "cdbcc508-0268-5b25-a6b9-eba28bcae2e5", 00:12:41.199 "is_configured": true, 00:12:41.199 "data_offset": 0, 00:12:41.199 "data_size": 65536 00:12:41.199 }, 00:12:41.199 { 00:12:41.199 "name": null, 00:12:41.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.199 "is_configured": false, 00:12:41.199 "data_offset": 0, 00:12:41.199 "data_size": 65536 00:12:41.199 }, 00:12:41.199 { 00:12:41.199 "name": "BaseBdev3", 00:12:41.199 "uuid": "e4e85ed0-4f5d-54d1-ae11-a9bad11c0c8e", 00:12:41.199 "is_configured": true, 00:12:41.199 "data_offset": 0, 00:12:41.199 "data_size": 65536 00:12:41.199 }, 00:12:41.199 { 00:12:41.199 "name": "BaseBdev4", 00:12:41.199 "uuid": "7a08cbed-8178-5fc6-bbc2-f057eef1bb4c", 00:12:41.199 "is_configured": true, 00:12:41.199 "data_offset": 0, 00:12:41.199 "data_size": 65536 00:12:41.199 } 00:12:41.199 ] 00:12:41.199 }' 00:12:41.199 04:28:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.199 04:28:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.459 [2024-12-13 04:28:41.389554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.459 [2024-12-13 04:28:41.389583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.459 [2024-12-13 04:28:41.389722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:12:41.459 [2024-12-13 04:28:41.389804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.459 [2024-12-13 04:28:41.389817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local 
i 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.459 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:41.719 /dev/nbd0 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.719 1+0 records in 00:12:41.719 1+0 records out 00:12:41.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426345 s, 9.6 MB/s 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:41.719 04:28:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.719 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:41.979 /dev/nbd1 00:12:41.979 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:41.979 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:41.979 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:41.979 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.980 1+0 records in 00:12:41.980 1+0 records out 00:12:41.980 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000260663 s, 15.7 MB/s 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.980 04:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.240 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 89902 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 89902 ']' 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 89902 00:12:42.500 
04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.500 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89902 00:12:42.760 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.760 killing process with pid 89902 00:12:42.760 Received shutdown signal, test time was about 60.000000 seconds 00:12:42.760 00:12:42.760 Latency(us) 00:12:42.760 [2024-12-13T04:28:42.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.760 [2024-12-13T04:28:42.775Z] =================================================================================================================== 00:12:42.760 [2024-12-13T04:28:42.775Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:42.760 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.760 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89902' 00:12:42.760 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 89902 00:12:42.760 [2024-12-13 04:28:42.535549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.760 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 89902 00:12:42.760 [2024-12-13 04:28:42.629604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.021 ************************************ 00:12:43.021 END TEST raid_rebuild_test 00:12:43.021 ************************************ 00:12:43.021 04:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:43.021 00:12:43.021 real 0m15.804s 00:12:43.021 user 0m17.493s 00:12:43.021 sys 0m3.218s 00:12:43.021 04:28:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.021 04:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.021 04:28:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:43.021 04:28:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:43.021 04:28:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.021 04:28:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.021 ************************************ 00:12:43.021 START TEST raid_rebuild_test_sb 00:12:43.021 ************************************ 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:43.021 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:43.281 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:43.281 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:43.281 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- 
# raid_pid=90331 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90331 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90331 ']' 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.282 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.282 [2024-12-13 04:28:43.116720] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:43.282 [2024-12-13 04:28:43.116915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:43.282 Zero copy mechanism will not be used. 
00:12:43.282 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90331 ] 00:12:43.282 [2024-12-13 04:28:43.272816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.541 [2024-12-13 04:28:43.311957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.541 [2024-12-13 04:28:43.387958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.541 [2024-12-13 04:28:43.388089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.111 BaseBdev1_malloc 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.111 [2024-12-13 04:28:43.980636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:44.111 [2024-12-13 04:28:43.980780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:44.111 [2024-12-13 04:28:43.980818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:44.111 [2024-12-13 04:28:43.980832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.111 [2024-12-13 04:28:43.983269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.111 [2024-12-13 04:28:43.983305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.111 BaseBdev1 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.111 04:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.111 BaseBdev2_malloc 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.111 [2024-12-13 04:28:44.015355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:44.111 [2024-12-13 04:28:44.015506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.111 [2024-12-13 04:28:44.015538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:44.111 [2024-12-13 04:28:44.015547] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.111 [2024-12-13 04:28:44.017966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.111 [2024-12-13 04:28:44.018006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.111 BaseBdev2 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.111 BaseBdev3_malloc 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.111 [2024-12-13 04:28:44.049962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:44.111 [2024-12-13 04:28:44.050091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.111 [2024-12-13 04:28:44.050124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:44.111 [2024-12-13 04:28:44.050134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.111 [2024-12-13 04:28:44.052564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:44.111 [2024-12-13 04:28:44.052597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.111 BaseBdev3 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:44.111 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.112 BaseBdev4_malloc 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.112 [2024-12-13 04:28:44.094755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:44.112 [2024-12-13 04:28:44.094804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.112 [2024-12-13 04:28:44.094828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:44.112 [2024-12-13 04:28:44.094836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.112 [2024-12-13 04:28:44.097173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.112 [2024-12-13 04:28:44.097207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.112 BaseBdev4 00:12:44.112 04:28:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.112 spare_malloc 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.112 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.372 spare_delay 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.372 [2024-12-13 04:28:44.141097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:44.372 [2024-12-13 04:28:44.141145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.372 [2024-12-13 04:28:44.141164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:44.372 [2024-12-13 04:28:44.141172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.372 [2024-12-13 04:28:44.143675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:44.372 [2024-12-13 04:28:44.143708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:44.372 spare 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.372 [2024-12-13 04:28:44.153169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.372 [2024-12-13 04:28:44.155275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.372 [2024-12-13 04:28:44.155335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.372 [2024-12-13 04:28:44.155382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.372 [2024-12-13 04:28:44.155576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:44.372 [2024-12-13 04:28:44.155592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.372 [2024-12-13 04:28:44.155846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:44.372 [2024-12-13 04:28:44.156012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:44.372 [2024-12-13 04:28:44.156025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:44.372 [2024-12-13 04:28:44.156144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.372 "name": "raid_bdev1", 00:12:44.372 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:44.372 "strip_size_kb": 0, 00:12:44.372 "state": "online", 00:12:44.372 "raid_level": "raid1", 
00:12:44.372 "superblock": true, 00:12:44.372 "num_base_bdevs": 4, 00:12:44.372 "num_base_bdevs_discovered": 4, 00:12:44.372 "num_base_bdevs_operational": 4, 00:12:44.372 "base_bdevs_list": [ 00:12:44.372 { 00:12:44.372 "name": "BaseBdev1", 00:12:44.372 "uuid": "ba669706-0459-5714-baa6-3b3c37b67173", 00:12:44.372 "is_configured": true, 00:12:44.372 "data_offset": 2048, 00:12:44.372 "data_size": 63488 00:12:44.372 }, 00:12:44.372 { 00:12:44.372 "name": "BaseBdev2", 00:12:44.372 "uuid": "6e4cf874-07a7-5c6d-b76a-434e2db7c32f", 00:12:44.372 "is_configured": true, 00:12:44.372 "data_offset": 2048, 00:12:44.372 "data_size": 63488 00:12:44.372 }, 00:12:44.372 { 00:12:44.372 "name": "BaseBdev3", 00:12:44.372 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:44.372 "is_configured": true, 00:12:44.372 "data_offset": 2048, 00:12:44.372 "data_size": 63488 00:12:44.372 }, 00:12:44.372 { 00:12:44.372 "name": "BaseBdev4", 00:12:44.372 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:44.372 "is_configured": true, 00:12:44.372 "data_offset": 2048, 00:12:44.372 "data_size": 63488 00:12:44.372 } 00:12:44.372 ] 00:12:44.372 }' 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.372 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.632 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.632 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:44.632 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.632 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.632 [2024-12-13 04:28:44.628868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.893 
04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:12:44.893 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:44.893 [2024-12-13 04:28:44.896588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:45.153 /dev/nbd0 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.153 1+0 records in 00:12:45.153 1+0 records out 00:12:45.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440533 s, 9.3 MB/s 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:45.153 04:28:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:45.153 04:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:50.429 63488+0 records in 00:12:50.429 63488+0 records out 00:12:50.429 32505856 bytes (33 MB, 31 MiB) copied, 5.31079 s, 6.1 MB/s 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.429 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:50.689 [2024-12-13 04:28:50.467684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.689 [2024-12-13 04:28:50.511666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.689 04:28:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.689 "name": "raid_bdev1", 00:12:50.689 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:50.689 "strip_size_kb": 0, 00:12:50.689 "state": "online", 00:12:50.689 "raid_level": "raid1", 00:12:50.689 "superblock": true, 00:12:50.689 "num_base_bdevs": 4, 00:12:50.689 "num_base_bdevs_discovered": 3, 00:12:50.689 "num_base_bdevs_operational": 3, 00:12:50.689 "base_bdevs_list": [ 00:12:50.689 { 00:12:50.689 "name": null, 00:12:50.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.689 "is_configured": false, 00:12:50.689 "data_offset": 0, 00:12:50.689 "data_size": 63488 00:12:50.689 }, 00:12:50.689 { 00:12:50.689 "name": "BaseBdev2", 00:12:50.689 "uuid": "6e4cf874-07a7-5c6d-b76a-434e2db7c32f", 00:12:50.689 "is_configured": true, 00:12:50.689 "data_offset": 2048, 00:12:50.689 "data_size": 63488 00:12:50.689 }, 00:12:50.689 { 00:12:50.689 "name": "BaseBdev3", 00:12:50.689 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 
00:12:50.689 "is_configured": true, 00:12:50.689 "data_offset": 2048, 00:12:50.689 "data_size": 63488 00:12:50.689 }, 00:12:50.689 { 00:12:50.689 "name": "BaseBdev4", 00:12:50.689 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:50.689 "is_configured": true, 00:12:50.689 "data_offset": 2048, 00:12:50.689 "data_size": 63488 00:12:50.689 } 00:12:50.689 ] 00:12:50.689 }' 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.689 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.258 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.258 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.258 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.258 [2024-12-13 04:28:50.978856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.258 [2024-12-13 04:28:50.986036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:12:51.258 04:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.258 04:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:51.258 [2024-12-13 04:28:50.988218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.197 04:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.197 "name": "raid_bdev1", 00:12:52.197 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:52.197 "strip_size_kb": 0, 00:12:52.197 "state": "online", 00:12:52.197 "raid_level": "raid1", 00:12:52.197 "superblock": true, 00:12:52.197 "num_base_bdevs": 4, 00:12:52.197 "num_base_bdevs_discovered": 4, 00:12:52.197 "num_base_bdevs_operational": 4, 00:12:52.197 "process": { 00:12:52.197 "type": "rebuild", 00:12:52.197 "target": "spare", 00:12:52.197 "progress": { 00:12:52.197 "blocks": 20480, 00:12:52.197 "percent": 32 00:12:52.197 } 00:12:52.197 }, 00:12:52.197 "base_bdevs_list": [ 00:12:52.197 { 00:12:52.197 "name": "spare", 00:12:52.197 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:52.197 "is_configured": true, 00:12:52.197 "data_offset": 2048, 00:12:52.197 "data_size": 63488 00:12:52.197 }, 00:12:52.197 { 00:12:52.197 "name": "BaseBdev2", 00:12:52.197 "uuid": "6e4cf874-07a7-5c6d-b76a-434e2db7c32f", 00:12:52.197 "is_configured": true, 00:12:52.197 "data_offset": 2048, 00:12:52.197 "data_size": 63488 00:12:52.197 }, 00:12:52.197 { 00:12:52.197 "name": "BaseBdev3", 00:12:52.197 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:52.197 "is_configured": true, 00:12:52.197 "data_offset": 2048, 00:12:52.197 "data_size": 63488 00:12:52.197 }, 00:12:52.197 { 
00:12:52.197 "name": "BaseBdev4", 00:12:52.197 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:52.197 "is_configured": true, 00:12:52.197 "data_offset": 2048, 00:12:52.197 "data_size": 63488 00:12:52.197 } 00:12:52.197 ] 00:12:52.197 }' 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.197 [2024-12-13 04:28:52.148578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.197 [2024-12-13 04:28:52.196402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.197 [2024-12-13 04:28:52.196549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.197 [2024-12-13 04:28:52.196573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.197 [2024-12-13 04:28:52.196582] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.197 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.457 "name": "raid_bdev1", 00:12:52.457 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:52.457 "strip_size_kb": 0, 00:12:52.457 "state": "online", 00:12:52.457 "raid_level": "raid1", 00:12:52.457 "superblock": true, 00:12:52.457 "num_base_bdevs": 4, 00:12:52.457 "num_base_bdevs_discovered": 3, 00:12:52.457 "num_base_bdevs_operational": 3, 00:12:52.457 "base_bdevs_list": [ 00:12:52.457 { 00:12:52.457 "name": null, 00:12:52.457 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:52.457 "is_configured": false, 00:12:52.457 "data_offset": 0, 00:12:52.457 "data_size": 63488 00:12:52.457 }, 00:12:52.457 { 00:12:52.457 "name": "BaseBdev2", 00:12:52.457 "uuid": "6e4cf874-07a7-5c6d-b76a-434e2db7c32f", 00:12:52.457 "is_configured": true, 00:12:52.457 "data_offset": 2048, 00:12:52.457 "data_size": 63488 00:12:52.457 }, 00:12:52.457 { 00:12:52.457 "name": "BaseBdev3", 00:12:52.457 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:52.457 "is_configured": true, 00:12:52.457 "data_offset": 2048, 00:12:52.457 "data_size": 63488 00:12:52.457 }, 00:12:52.457 { 00:12:52.457 "name": "BaseBdev4", 00:12:52.457 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:52.457 "is_configured": true, 00:12:52.457 "data_offset": 2048, 00:12:52.457 "data_size": 63488 00:12:52.457 } 00:12:52.457 ] 00:12:52.457 }' 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.457 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.717 04:28:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.717 "name": "raid_bdev1", 00:12:52.717 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:52.717 "strip_size_kb": 0, 00:12:52.717 "state": "online", 00:12:52.717 "raid_level": "raid1", 00:12:52.717 "superblock": true, 00:12:52.717 "num_base_bdevs": 4, 00:12:52.717 "num_base_bdevs_discovered": 3, 00:12:52.717 "num_base_bdevs_operational": 3, 00:12:52.717 "base_bdevs_list": [ 00:12:52.717 { 00:12:52.717 "name": null, 00:12:52.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.717 "is_configured": false, 00:12:52.717 "data_offset": 0, 00:12:52.717 "data_size": 63488 00:12:52.717 }, 00:12:52.717 { 00:12:52.717 "name": "BaseBdev2", 00:12:52.717 "uuid": "6e4cf874-07a7-5c6d-b76a-434e2db7c32f", 00:12:52.717 "is_configured": true, 00:12:52.717 "data_offset": 2048, 00:12:52.717 "data_size": 63488 00:12:52.717 }, 00:12:52.717 { 00:12:52.717 "name": "BaseBdev3", 00:12:52.717 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:52.717 "is_configured": true, 00:12:52.717 "data_offset": 2048, 00:12:52.717 "data_size": 63488 00:12:52.717 }, 00:12:52.717 { 00:12:52.717 "name": "BaseBdev4", 00:12:52.717 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:52.717 "is_configured": true, 00:12:52.717 "data_offset": 2048, 00:12:52.717 "data_size": 63488 00:12:52.717 } 00:12:52.717 ] 00:12:52.717 }' 00:12:52.717 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.977 [2024-12-13 04:28:52.826192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.977 [2024-12-13 04:28:52.831098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.977 04:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:52.977 [2024-12-13 04:28:52.833323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.917 04:28:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.917 "name": "raid_bdev1", 00:12:53.917 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:53.917 "strip_size_kb": 0, 00:12:53.917 "state": "online", 00:12:53.917 "raid_level": "raid1", 00:12:53.917 "superblock": true, 00:12:53.917 "num_base_bdevs": 4, 00:12:53.917 "num_base_bdevs_discovered": 4, 00:12:53.917 "num_base_bdevs_operational": 4, 00:12:53.917 "process": { 00:12:53.917 "type": "rebuild", 00:12:53.917 "target": "spare", 00:12:53.917 "progress": { 00:12:53.917 "blocks": 20480, 00:12:53.917 "percent": 32 00:12:53.917 } 00:12:53.917 }, 00:12:53.917 "base_bdevs_list": [ 00:12:53.917 { 00:12:53.917 "name": "spare", 00:12:53.917 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:53.917 "is_configured": true, 00:12:53.917 "data_offset": 2048, 00:12:53.917 "data_size": 63488 00:12:53.917 }, 00:12:53.917 { 00:12:53.917 "name": "BaseBdev2", 00:12:53.917 "uuid": "6e4cf874-07a7-5c6d-b76a-434e2db7c32f", 00:12:53.917 "is_configured": true, 00:12:53.917 "data_offset": 2048, 00:12:53.917 "data_size": 63488 00:12:53.917 }, 00:12:53.917 { 00:12:53.917 "name": "BaseBdev3", 00:12:53.917 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:53.917 "is_configured": true, 00:12:53.917 "data_offset": 2048, 00:12:53.917 "data_size": 63488 00:12:53.917 }, 00:12:53.917 { 00:12:53.917 "name": "BaseBdev4", 00:12:53.917 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:53.917 "is_configured": true, 00:12:53.917 "data_offset": 2048, 00:12:53.917 "data_size": 63488 00:12:53.917 } 00:12:53.917 ] 00:12:53.917 }' 00:12:53.917 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:54.177 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.177 04:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.177 [2024-12-13 04:28:53.982596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.177 [2024-12-13 04:28:54.140572] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.177 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.436 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.436 "name": "raid_bdev1", 00:12:54.436 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:54.436 "strip_size_kb": 0, 00:12:54.436 "state": "online", 00:12:54.436 "raid_level": "raid1", 00:12:54.436 "superblock": true, 00:12:54.436 "num_base_bdevs": 4, 00:12:54.436 "num_base_bdevs_discovered": 3, 00:12:54.436 "num_base_bdevs_operational": 3, 00:12:54.436 "process": { 00:12:54.436 "type": "rebuild", 00:12:54.436 "target": "spare", 00:12:54.436 "progress": { 00:12:54.436 "blocks": 24576, 00:12:54.436 "percent": 38 00:12:54.436 } 00:12:54.436 }, 00:12:54.436 "base_bdevs_list": [ 00:12:54.436 { 00:12:54.436 "name": "spare", 00:12:54.436 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:54.437 "is_configured": true, 00:12:54.437 "data_offset": 2048, 00:12:54.437 "data_size": 63488 00:12:54.437 }, 00:12:54.437 { 00:12:54.437 "name": null, 00:12:54.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.437 "is_configured": false, 00:12:54.437 "data_offset": 0, 00:12:54.437 "data_size": 63488 00:12:54.437 }, 00:12:54.437 { 00:12:54.437 "name": "BaseBdev3", 
00:12:54.437 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:54.437 "is_configured": true, 00:12:54.437 "data_offset": 2048, 00:12:54.437 "data_size": 63488 00:12:54.437 }, 00:12:54.437 { 00:12:54.437 "name": "BaseBdev4", 00:12:54.437 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:54.437 "is_configured": true, 00:12:54.437 "data_offset": 2048, 00:12:54.437 "data_size": 63488 00:12:54.437 } 00:12:54.437 ] 00:12:54.437 }' 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.437 "name": "raid_bdev1", 00:12:54.437 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:54.437 "strip_size_kb": 0, 00:12:54.437 "state": "online", 00:12:54.437 "raid_level": "raid1", 00:12:54.437 "superblock": true, 00:12:54.437 "num_base_bdevs": 4, 00:12:54.437 "num_base_bdevs_discovered": 3, 00:12:54.437 "num_base_bdevs_operational": 3, 00:12:54.437 "process": { 00:12:54.437 "type": "rebuild", 00:12:54.437 "target": "spare", 00:12:54.437 "progress": { 00:12:54.437 "blocks": 26624, 00:12:54.437 "percent": 41 00:12:54.437 } 00:12:54.437 }, 00:12:54.437 "base_bdevs_list": [ 00:12:54.437 { 00:12:54.437 "name": "spare", 00:12:54.437 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:54.437 "is_configured": true, 00:12:54.437 "data_offset": 2048, 00:12:54.437 "data_size": 63488 00:12:54.437 }, 00:12:54.437 { 00:12:54.437 "name": null, 00:12:54.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.437 "is_configured": false, 00:12:54.437 "data_offset": 0, 00:12:54.437 "data_size": 63488 00:12:54.437 }, 00:12:54.437 { 00:12:54.437 "name": "BaseBdev3", 00:12:54.437 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:54.437 "is_configured": true, 00:12:54.437 "data_offset": 2048, 00:12:54.437 "data_size": 63488 00:12:54.437 }, 00:12:54.437 { 00:12:54.437 "name": "BaseBdev4", 00:12:54.437 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:54.437 "is_configured": true, 00:12:54.437 "data_offset": 2048, 00:12:54.437 "data_size": 63488 00:12:54.437 } 00:12:54.437 ] 00:12:54.437 }' 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.437 04:28:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.437 04:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.819 "name": "raid_bdev1", 00:12:55.819 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:55.819 "strip_size_kb": 0, 00:12:55.819 "state": "online", 00:12:55.819 "raid_level": "raid1", 00:12:55.819 "superblock": true, 00:12:55.819 "num_base_bdevs": 4, 
00:12:55.819 "num_base_bdevs_discovered": 3, 00:12:55.819 "num_base_bdevs_operational": 3, 00:12:55.819 "process": { 00:12:55.819 "type": "rebuild", 00:12:55.819 "target": "spare", 00:12:55.819 "progress": { 00:12:55.819 "blocks": 51200, 00:12:55.819 "percent": 80 00:12:55.819 } 00:12:55.819 }, 00:12:55.819 "base_bdevs_list": [ 00:12:55.819 { 00:12:55.819 "name": "spare", 00:12:55.819 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:55.819 "is_configured": true, 00:12:55.819 "data_offset": 2048, 00:12:55.819 "data_size": 63488 00:12:55.819 }, 00:12:55.819 { 00:12:55.819 "name": null, 00:12:55.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.819 "is_configured": false, 00:12:55.819 "data_offset": 0, 00:12:55.819 "data_size": 63488 00:12:55.819 }, 00:12:55.819 { 00:12:55.819 "name": "BaseBdev3", 00:12:55.819 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:55.819 "is_configured": true, 00:12:55.819 "data_offset": 2048, 00:12:55.819 "data_size": 63488 00:12:55.819 }, 00:12:55.819 { 00:12:55.819 "name": "BaseBdev4", 00:12:55.819 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:55.819 "is_configured": true, 00:12:55.819 "data_offset": 2048, 00:12:55.819 "data_size": 63488 00:12:55.819 } 00:12:55.819 ] 00:12:55.819 }' 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.819 04:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.079 [2024-12-13 04:28:56.052131] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:56.079 [2024-12-13 04:28:56.052273] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:56.079 [2024-12-13 04:28:56.052418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.649 "name": "raid_bdev1", 00:12:56.649 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:56.649 "strip_size_kb": 0, 00:12:56.649 "state": "online", 00:12:56.649 "raid_level": "raid1", 00:12:56.649 "superblock": true, 00:12:56.649 "num_base_bdevs": 4, 00:12:56.649 "num_base_bdevs_discovered": 3, 00:12:56.649 "num_base_bdevs_operational": 3, 00:12:56.649 "base_bdevs_list": [ 00:12:56.649 { 00:12:56.649 "name": "spare", 00:12:56.649 "uuid": 
"4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:56.649 "is_configured": true, 00:12:56.649 "data_offset": 2048, 00:12:56.649 "data_size": 63488 00:12:56.649 }, 00:12:56.649 { 00:12:56.649 "name": null, 00:12:56.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.649 "is_configured": false, 00:12:56.649 "data_offset": 0, 00:12:56.649 "data_size": 63488 00:12:56.649 }, 00:12:56.649 { 00:12:56.649 "name": "BaseBdev3", 00:12:56.649 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:56.649 "is_configured": true, 00:12:56.649 "data_offset": 2048, 00:12:56.649 "data_size": 63488 00:12:56.649 }, 00:12:56.649 { 00:12:56.649 "name": "BaseBdev4", 00:12:56.649 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:56.649 "is_configured": true, 00:12:56.649 "data_offset": 2048, 00:12:56.649 "data_size": 63488 00:12:56.649 } 00:12:56.649 ] 00:12:56.649 }' 00:12:56.649 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.909 04:28:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.909 "name": "raid_bdev1", 00:12:56.909 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:56.909 "strip_size_kb": 0, 00:12:56.909 "state": "online", 00:12:56.909 "raid_level": "raid1", 00:12:56.909 "superblock": true, 00:12:56.909 "num_base_bdevs": 4, 00:12:56.909 "num_base_bdevs_discovered": 3, 00:12:56.909 "num_base_bdevs_operational": 3, 00:12:56.909 "base_bdevs_list": [ 00:12:56.909 { 00:12:56.909 "name": "spare", 00:12:56.909 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:56.909 "is_configured": true, 00:12:56.909 "data_offset": 2048, 00:12:56.909 "data_size": 63488 00:12:56.909 }, 00:12:56.909 { 00:12:56.909 "name": null, 00:12:56.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.909 "is_configured": false, 00:12:56.909 "data_offset": 0, 00:12:56.909 "data_size": 63488 00:12:56.909 }, 00:12:56.909 { 00:12:56.909 "name": "BaseBdev3", 00:12:56.909 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:56.909 "is_configured": true, 00:12:56.909 "data_offset": 2048, 00:12:56.909 "data_size": 63488 00:12:56.909 }, 00:12:56.909 { 00:12:56.909 "name": "BaseBdev4", 00:12:56.909 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:56.909 "is_configured": true, 00:12:56.909 "data_offset": 2048, 00:12:56.909 "data_size": 63488 00:12:56.909 } 00:12:56.909 ] 00:12:56.909 }' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.909 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.169 04:28:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.169 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.169 "name": "raid_bdev1", 00:12:57.169 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:57.169 "strip_size_kb": 0, 00:12:57.169 "state": "online", 00:12:57.169 "raid_level": "raid1", 00:12:57.169 "superblock": true, 00:12:57.169 "num_base_bdevs": 4, 00:12:57.169 "num_base_bdevs_discovered": 3, 00:12:57.169 "num_base_bdevs_operational": 3, 00:12:57.169 "base_bdevs_list": [ 00:12:57.169 { 00:12:57.169 "name": "spare", 00:12:57.169 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:57.169 "is_configured": true, 00:12:57.169 "data_offset": 2048, 00:12:57.169 "data_size": 63488 00:12:57.169 }, 00:12:57.169 { 00:12:57.169 "name": null, 00:12:57.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.169 "is_configured": false, 00:12:57.169 "data_offset": 0, 00:12:57.169 "data_size": 63488 00:12:57.169 }, 00:12:57.169 { 00:12:57.169 "name": "BaseBdev3", 00:12:57.169 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:57.169 "is_configured": true, 00:12:57.169 "data_offset": 2048, 00:12:57.169 "data_size": 63488 00:12:57.169 }, 00:12:57.169 { 00:12:57.169 "name": "BaseBdev4", 00:12:57.169 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:57.169 "is_configured": true, 00:12:57.169 "data_offset": 2048, 00:12:57.169 "data_size": 63488 00:12:57.169 } 00:12:57.169 ] 00:12:57.169 }' 00:12:57.169 04:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.169 04:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.428 
[2024-12-13 04:28:57.388531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.428 [2024-12-13 04:28:57.388558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.428 [2024-12-13 04:28:57.388655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.428 [2024-12-13 04:28:57.388722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.428 [2024-12-13 04:28:57.388740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.428 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:57.688 04:28:57 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:57.688 /dev/nbd0 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:12:57.688 1+0 records in 00:12:57.688 1+0 records out 00:12:57.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377307 s, 10.9 MB/s 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.688 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:57.689 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.689 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.689 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:57.689 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.689 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.689 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:57.949 /dev/nbd1 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.949 1+0 records in 00:12:57.949 1+0 records out 00:12:57.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423112 s, 9.7 MB/s 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.949 04:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:58.209 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:58.210 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.210 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:58.210 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.210 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:12:58.210 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.210 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.470 
04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.470 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 [2024-12-13 04:28:58.487775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.730 [2024-12-13 04:28:58.487838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.730 [2024-12-13 04:28:58.487863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:58.730 [2024-12-13 04:28:58.487877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.730 [2024-12-13 04:28:58.490358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.730 [2024-12-13 04:28:58.490398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.730 [2024-12-13 04:28:58.490499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:58.730 [2024-12-13 04:28:58.490551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:12:58.730 [2024-12-13 04:28:58.490663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.730 [2024-12-13 04:28:58.490768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:58.730 spare 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 [2024-12-13 04:28:58.590660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:58.730 [2024-12-13 04:28:58.590695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.730 [2024-12-13 04:28:58.590960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:58.730 [2024-12-13 04:28:58.591112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:58.730 [2024-12-13 04:28:58.591121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:58.730 [2024-12-13 04:28:58.591238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.730 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.730 "name": "raid_bdev1", 00:12:58.730 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:58.730 "strip_size_kb": 0, 00:12:58.730 "state": "online", 00:12:58.730 "raid_level": "raid1", 00:12:58.730 "superblock": true, 00:12:58.730 "num_base_bdevs": 4, 00:12:58.730 "num_base_bdevs_discovered": 3, 00:12:58.730 "num_base_bdevs_operational": 3, 00:12:58.730 "base_bdevs_list": [ 00:12:58.730 { 00:12:58.730 "name": "spare", 00:12:58.730 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:58.730 "is_configured": true, 00:12:58.730 "data_offset": 2048, 00:12:58.730 "data_size": 63488 00:12:58.730 }, 00:12:58.730 { 00:12:58.730 "name": null, 
00:12:58.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.730 "is_configured": false, 00:12:58.730 "data_offset": 2048, 00:12:58.730 "data_size": 63488 00:12:58.731 }, 00:12:58.731 { 00:12:58.731 "name": "BaseBdev3", 00:12:58.731 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:58.731 "is_configured": true, 00:12:58.731 "data_offset": 2048, 00:12:58.731 "data_size": 63488 00:12:58.731 }, 00:12:58.731 { 00:12:58.731 "name": "BaseBdev4", 00:12:58.731 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:58.731 "is_configured": true, 00:12:58.731 "data_offset": 2048, 00:12:58.731 "data_size": 63488 00:12:58.731 } 00:12:58.731 ] 00:12:58.731 }' 00:12:58.731 04:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.731 04:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.300 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.300 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.301 04:28:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.301 "name": "raid_bdev1", 00:12:59.301 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:59.301 "strip_size_kb": 0, 00:12:59.301 "state": "online", 00:12:59.301 "raid_level": "raid1", 00:12:59.301 "superblock": true, 00:12:59.301 "num_base_bdevs": 4, 00:12:59.301 "num_base_bdevs_discovered": 3, 00:12:59.301 "num_base_bdevs_operational": 3, 00:12:59.301 "base_bdevs_list": [ 00:12:59.301 { 00:12:59.301 "name": "spare", 00:12:59.301 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:12:59.301 "is_configured": true, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "name": null, 00:12:59.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.301 "is_configured": false, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "name": "BaseBdev3", 00:12:59.301 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:59.301 "is_configured": true, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "name": "BaseBdev4", 00:12:59.301 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:59.301 "is_configured": true, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 } 00:12:59.301 ] 00:12:59.301 }' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.301 04:28:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.301 [2024-12-13 04:28:59.270476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.301 "name": "raid_bdev1", 00:12:59.301 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:12:59.301 "strip_size_kb": 0, 00:12:59.301 "state": "online", 00:12:59.301 "raid_level": "raid1", 00:12:59.301 "superblock": true, 00:12:59.301 "num_base_bdevs": 4, 00:12:59.301 "num_base_bdevs_discovered": 2, 00:12:59.301 "num_base_bdevs_operational": 2, 00:12:59.301 "base_bdevs_list": [ 00:12:59.301 { 00:12:59.301 "name": null, 00:12:59.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.301 "is_configured": false, 00:12:59.301 "data_offset": 0, 00:12:59.301 "data_size": 63488 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "name": null, 00:12:59.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.301 "is_configured": false, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "name": "BaseBdev3", 00:12:59.301 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:12:59.301 "is_configured": true, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "name": "BaseBdev4", 00:12:59.301 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:12:59.301 "is_configured": 
true, 00:12:59.301 "data_offset": 2048, 00:12:59.301 "data_size": 63488 00:12:59.301 } 00:12:59.301 ] 00:12:59.301 }' 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.301 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.871 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.871 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.871 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.871 [2024-12-13 04:28:59.645819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.871 [2024-12-13 04:28:59.645931] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:59.871 [2024-12-13 04:28:59.645954] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:59.871 [2024-12-13 04:28:59.645983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.871 [2024-12-13 04:28:59.652921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:59.871 04:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.871 04:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:59.871 [2024-12-13 04:28:59.655111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.811 "name": "raid_bdev1", 00:13:00.811 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:00.811 "strip_size_kb": 0, 00:13:00.811 "state": "online", 00:13:00.811 "raid_level": "raid1", 
00:13:00.811 "superblock": true, 00:13:00.811 "num_base_bdevs": 4, 00:13:00.811 "num_base_bdevs_discovered": 3, 00:13:00.811 "num_base_bdevs_operational": 3, 00:13:00.811 "process": { 00:13:00.811 "type": "rebuild", 00:13:00.811 "target": "spare", 00:13:00.811 "progress": { 00:13:00.811 "blocks": 20480, 00:13:00.811 "percent": 32 00:13:00.811 } 00:13:00.811 }, 00:13:00.811 "base_bdevs_list": [ 00:13:00.811 { 00:13:00.811 "name": "spare", 00:13:00.811 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:13:00.811 "is_configured": true, 00:13:00.811 "data_offset": 2048, 00:13:00.811 "data_size": 63488 00:13:00.811 }, 00:13:00.811 { 00:13:00.811 "name": null, 00:13:00.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.811 "is_configured": false, 00:13:00.811 "data_offset": 2048, 00:13:00.811 "data_size": 63488 00:13:00.811 }, 00:13:00.811 { 00:13:00.811 "name": "BaseBdev3", 00:13:00.811 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:00.811 "is_configured": true, 00:13:00.811 "data_offset": 2048, 00:13:00.811 "data_size": 63488 00:13:00.811 }, 00:13:00.811 { 00:13:00.811 "name": "BaseBdev4", 00:13:00.811 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:00.811 "is_configured": true, 00:13:00.811 "data_offset": 2048, 00:13:00.811 "data_size": 63488 00:13:00.811 } 00:13:00.811 ] 00:13:00.811 }' 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.811 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.812 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.812 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:00.812 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:00.812 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.812 [2024-12-13 04:29:00.810906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.072 [2024-12-13 04:29:00.862474] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.072 [2024-12-13 04:29:00.862579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.072 [2024-12-13 04:29:00.862614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.072 [2024-12-13 04:29:00.862638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.072 "name": "raid_bdev1", 00:13:01.072 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:01.072 "strip_size_kb": 0, 00:13:01.072 "state": "online", 00:13:01.072 "raid_level": "raid1", 00:13:01.072 "superblock": true, 00:13:01.072 "num_base_bdevs": 4, 00:13:01.072 "num_base_bdevs_discovered": 2, 00:13:01.072 "num_base_bdevs_operational": 2, 00:13:01.072 "base_bdevs_list": [ 00:13:01.072 { 00:13:01.072 "name": null, 00:13:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.072 "is_configured": false, 00:13:01.072 "data_offset": 0, 00:13:01.072 "data_size": 63488 00:13:01.072 }, 00:13:01.072 { 00:13:01.072 "name": null, 00:13:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.072 "is_configured": false, 00:13:01.072 "data_offset": 2048, 00:13:01.072 "data_size": 63488 00:13:01.072 }, 00:13:01.072 { 00:13:01.072 "name": "BaseBdev3", 00:13:01.072 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:01.072 "is_configured": true, 00:13:01.072 "data_offset": 2048, 00:13:01.072 "data_size": 63488 00:13:01.072 }, 00:13:01.072 { 00:13:01.072 "name": "BaseBdev4", 00:13:01.072 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:01.072 "is_configured": true, 00:13:01.072 "data_offset": 2048, 00:13:01.072 "data_size": 63488 00:13:01.072 } 00:13:01.072 ] 00:13:01.072 }' 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:01.072 04:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.642 04:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.642 04:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.642 04:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.642 [2024-12-13 04:29:01.360541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.642 [2024-12-13 04:29:01.360644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.642 [2024-12-13 04:29:01.360687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:13:01.642 [2024-12-13 04:29:01.360717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.642 [2024-12-13 04:29:01.361214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.642 [2024-12-13 04:29:01.361280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.642 [2024-12-13 04:29:01.361392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:01.643 [2024-12-13 04:29:01.361450] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:01.643 [2024-12-13 04:29:01.361510] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:01.643 [2024-12-13 04:29:01.361564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.643 [2024-12-13 04:29:01.366057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:13:01.643 spare 00:13:01.643 04:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.643 04:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:01.643 [2024-12-13 04:29:01.368242] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.582 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.583 "name": "raid_bdev1", 00:13:02.583 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:02.583 "strip_size_kb": 0, 00:13:02.583 "state": "online", 00:13:02.583 
"raid_level": "raid1", 00:13:02.583 "superblock": true, 00:13:02.583 "num_base_bdevs": 4, 00:13:02.583 "num_base_bdevs_discovered": 3, 00:13:02.583 "num_base_bdevs_operational": 3, 00:13:02.583 "process": { 00:13:02.583 "type": "rebuild", 00:13:02.583 "target": "spare", 00:13:02.583 "progress": { 00:13:02.583 "blocks": 20480, 00:13:02.583 "percent": 32 00:13:02.583 } 00:13:02.583 }, 00:13:02.583 "base_bdevs_list": [ 00:13:02.583 { 00:13:02.583 "name": "spare", 00:13:02.583 "uuid": "4c8d5e56-37be-5e97-b8f8-6653ffdc0980", 00:13:02.583 "is_configured": true, 00:13:02.583 "data_offset": 2048, 00:13:02.583 "data_size": 63488 00:13:02.583 }, 00:13:02.583 { 00:13:02.583 "name": null, 00:13:02.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.583 "is_configured": false, 00:13:02.583 "data_offset": 2048, 00:13:02.583 "data_size": 63488 00:13:02.583 }, 00:13:02.583 { 00:13:02.583 "name": "BaseBdev3", 00:13:02.583 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:02.583 "is_configured": true, 00:13:02.583 "data_offset": 2048, 00:13:02.583 "data_size": 63488 00:13:02.583 }, 00:13:02.583 { 00:13:02.583 "name": "BaseBdev4", 00:13:02.583 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:02.583 "is_configured": true, 00:13:02.583 "data_offset": 2048, 00:13:02.583 "data_size": 63488 00:13:02.583 } 00:13:02.583 ] 00:13:02.583 }' 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.583 [2024-12-13 04:29:02.524609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.583 [2024-12-13 04:29:02.575690] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.583 [2024-12-13 04:29:02.575750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.583 [2024-12-13 04:29:02.575771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.583 [2024-12-13 04:29:02.575778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.583 
04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.583 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.843 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.843 04:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.843 "name": "raid_bdev1", 00:13:02.843 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:02.843 "strip_size_kb": 0, 00:13:02.843 "state": "online", 00:13:02.843 "raid_level": "raid1", 00:13:02.843 "superblock": true, 00:13:02.843 "num_base_bdevs": 4, 00:13:02.843 "num_base_bdevs_discovered": 2, 00:13:02.843 "num_base_bdevs_operational": 2, 00:13:02.843 "base_bdevs_list": [ 00:13:02.843 { 00:13:02.843 "name": null, 00:13:02.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.843 "is_configured": false, 00:13:02.843 "data_offset": 0, 00:13:02.843 "data_size": 63488 00:13:02.843 }, 00:13:02.843 { 00:13:02.843 "name": null, 00:13:02.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.843 "is_configured": false, 00:13:02.843 "data_offset": 2048, 00:13:02.843 "data_size": 63488 00:13:02.843 }, 00:13:02.843 { 00:13:02.843 "name": "BaseBdev3", 00:13:02.843 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:02.843 "is_configured": true, 00:13:02.843 "data_offset": 2048, 00:13:02.843 "data_size": 63488 00:13:02.843 }, 00:13:02.843 { 00:13:02.843 "name": "BaseBdev4", 00:13:02.843 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:02.843 "is_configured": true, 00:13:02.843 "data_offset": 2048, 00:13:02.843 "data_size": 63488 00:13:02.843 } 00:13:02.843 ] 00:13:02.843 }' 00:13:02.843 04:29:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.843 04:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.103 "name": "raid_bdev1", 00:13:03.103 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:03.103 "strip_size_kb": 0, 00:13:03.103 "state": "online", 00:13:03.103 "raid_level": "raid1", 00:13:03.103 "superblock": true, 00:13:03.103 "num_base_bdevs": 4, 00:13:03.103 "num_base_bdevs_discovered": 2, 00:13:03.103 "num_base_bdevs_operational": 2, 00:13:03.103 "base_bdevs_list": [ 00:13:03.103 { 00:13:03.103 "name": null, 00:13:03.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.103 "is_configured": false, 00:13:03.103 "data_offset": 0, 00:13:03.103 "data_size": 63488 00:13:03.103 }, 00:13:03.103 
{ 00:13:03.103 "name": null, 00:13:03.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.103 "is_configured": false, 00:13:03.103 "data_offset": 2048, 00:13:03.103 "data_size": 63488 00:13:03.103 }, 00:13:03.103 { 00:13:03.103 "name": "BaseBdev3", 00:13:03.103 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:03.103 "is_configured": true, 00:13:03.103 "data_offset": 2048, 00:13:03.103 "data_size": 63488 00:13:03.103 }, 00:13:03.103 { 00:13:03.103 "name": "BaseBdev4", 00:13:03.103 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:03.103 "is_configured": true, 00:13:03.103 "data_offset": 2048, 00:13:03.103 "data_size": 63488 00:13:03.103 } 00:13:03.103 ] 00:13:03.103 }' 00:13:03.103 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.363 [2024-12-13 04:29:03.220966] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.363 [2024-12-13 04:29:03.221015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.363 [2024-12-13 04:29:03.221039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:03.363 [2024-12-13 04:29:03.221048] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.363 [2024-12-13 04:29:03.221492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.363 [2024-12-13 04:29:03.221510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.363 [2024-12-13 04:29:03.221584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:03.363 [2024-12-13 04:29:03.221598] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:03.363 [2024-12-13 04:29:03.221608] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:03.363 [2024-12-13 04:29:03.221618] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:03.363 BaseBdev1 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.363 04:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.313 04:29:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.313 "name": "raid_bdev1", 00:13:04.313 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:04.313 "strip_size_kb": 0, 00:13:04.313 "state": "online", 00:13:04.313 "raid_level": "raid1", 00:13:04.313 "superblock": true, 00:13:04.313 "num_base_bdevs": 4, 00:13:04.313 "num_base_bdevs_discovered": 2, 00:13:04.313 "num_base_bdevs_operational": 2, 00:13:04.313 "base_bdevs_list": [ 00:13:04.313 { 00:13:04.313 "name": null, 00:13:04.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.313 "is_configured": false, 00:13:04.313 "data_offset": 0, 00:13:04.313 "data_size": 63488 00:13:04.313 }, 00:13:04.313 { 00:13:04.313 "name": null, 00:13:04.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.313 
"is_configured": false, 00:13:04.313 "data_offset": 2048, 00:13:04.313 "data_size": 63488 00:13:04.313 }, 00:13:04.313 { 00:13:04.313 "name": "BaseBdev3", 00:13:04.313 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:04.313 "is_configured": true, 00:13:04.313 "data_offset": 2048, 00:13:04.313 "data_size": 63488 00:13:04.313 }, 00:13:04.313 { 00:13:04.313 "name": "BaseBdev4", 00:13:04.313 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:04.313 "is_configured": true, 00:13:04.313 "data_offset": 2048, 00:13:04.313 "data_size": 63488 00:13:04.313 } 00:13:04.313 ] 00:13:04.313 }' 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.313 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:04.918 "name": "raid_bdev1", 00:13:04.918 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:04.918 "strip_size_kb": 0, 00:13:04.918 "state": "online", 00:13:04.918 "raid_level": "raid1", 00:13:04.918 "superblock": true, 00:13:04.918 "num_base_bdevs": 4, 00:13:04.918 "num_base_bdevs_discovered": 2, 00:13:04.918 "num_base_bdevs_operational": 2, 00:13:04.918 "base_bdevs_list": [ 00:13:04.918 { 00:13:04.918 "name": null, 00:13:04.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.918 "is_configured": false, 00:13:04.918 "data_offset": 0, 00:13:04.918 "data_size": 63488 00:13:04.918 }, 00:13:04.918 { 00:13:04.918 "name": null, 00:13:04.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.918 "is_configured": false, 00:13:04.918 "data_offset": 2048, 00:13:04.918 "data_size": 63488 00:13:04.918 }, 00:13:04.918 { 00:13:04.918 "name": "BaseBdev3", 00:13:04.918 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:04.918 "is_configured": true, 00:13:04.918 "data_offset": 2048, 00:13:04.918 "data_size": 63488 00:13:04.918 }, 00:13:04.918 { 00:13:04.918 "name": "BaseBdev4", 00:13:04.918 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:04.918 "is_configured": true, 00:13:04.918 "data_offset": 2048, 00:13:04.918 "data_size": 63488 00:13:04.918 } 00:13:04.918 ] 00:13:04.918 }' 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.918 [2024-12-13 04:29:04.916571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.918 [2024-12-13 04:29:04.916763] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:04.918 [2024-12-13 04:29:04.916784] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:04.918 request: 00:13:04.918 { 00:13:04.918 "base_bdev": "BaseBdev1", 00:13:04.918 "raid_bdev": "raid_bdev1", 00:13:04.918 "method": "bdev_raid_add_base_bdev", 00:13:04.918 "req_id": 1 00:13:04.918 } 00:13:04.918 Got JSON-RPC error response 00:13:04.918 response: 00:13:04.918 { 00:13:04.918 "code": -22, 00:13:04.918 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:04.918 } 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.918 04:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.311 "name": "raid_bdev1", 00:13:06.311 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:06.311 "strip_size_kb": 0, 00:13:06.311 "state": "online", 00:13:06.311 "raid_level": "raid1", 00:13:06.311 "superblock": true, 00:13:06.311 "num_base_bdevs": 4, 00:13:06.311 "num_base_bdevs_discovered": 2, 00:13:06.311 "num_base_bdevs_operational": 2, 00:13:06.311 "base_bdevs_list": [ 00:13:06.311 { 00:13:06.311 "name": null, 00:13:06.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.311 "is_configured": false, 00:13:06.311 "data_offset": 0, 00:13:06.311 "data_size": 63488 00:13:06.311 }, 00:13:06.311 { 00:13:06.311 "name": null, 00:13:06.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.311 "is_configured": false, 00:13:06.311 "data_offset": 2048, 00:13:06.311 "data_size": 63488 00:13:06.311 }, 00:13:06.311 { 00:13:06.311 "name": "BaseBdev3", 00:13:06.311 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:06.311 "is_configured": true, 00:13:06.311 "data_offset": 2048, 00:13:06.311 "data_size": 63488 00:13:06.311 }, 00:13:06.311 { 00:13:06.311 "name": "BaseBdev4", 00:13:06.311 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:06.311 "is_configured": true, 00:13:06.311 "data_offset": 2048, 00:13:06.311 "data_size": 63488 00:13:06.311 } 00:13:06.311 ] 00:13:06.311 }' 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.311 04:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.571 04:29:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.571 "name": "raid_bdev1", 00:13:06.571 "uuid": "d02242b1-4e93-4075-9349-7049980c66ef", 00:13:06.571 "strip_size_kb": 0, 00:13:06.571 "state": "online", 00:13:06.571 "raid_level": "raid1", 00:13:06.571 "superblock": true, 00:13:06.571 "num_base_bdevs": 4, 00:13:06.571 "num_base_bdevs_discovered": 2, 00:13:06.571 "num_base_bdevs_operational": 2, 00:13:06.571 "base_bdevs_list": [ 00:13:06.571 { 00:13:06.571 "name": null, 00:13:06.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.571 "is_configured": false, 00:13:06.571 "data_offset": 0, 00:13:06.571 "data_size": 63488 00:13:06.571 }, 00:13:06.571 { 00:13:06.571 "name": null, 00:13:06.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.571 "is_configured": false, 00:13:06.571 "data_offset": 2048, 00:13:06.571 "data_size": 63488 00:13:06.571 }, 00:13:06.571 { 00:13:06.571 "name": "BaseBdev3", 00:13:06.571 "uuid": "03353933-a6ee-52e2-a2fa-9a0e077b1329", 00:13:06.571 "is_configured": true, 00:13:06.571 "data_offset": 2048, 00:13:06.571 "data_size": 63488 00:13:06.571 }, 
00:13:06.571 { 00:13:06.571 "name": "BaseBdev4", 00:13:06.571 "uuid": "184001b7-36f8-5d32-94a5-702ffc23826a", 00:13:06.571 "is_configured": true, 00:13:06.571 "data_offset": 2048, 00:13:06.571 "data_size": 63488 00:13:06.571 } 00:13:06.571 ] 00:13:06.571 }' 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90331 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90331 ']' 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 90331 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.571 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90331 00:13:06.832 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.832 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.832 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90331' 00:13:06.832 killing process with pid 90331 00:13:06.832 Received shutdown signal, test time was about 60.000000 seconds 00:13:06.832 00:13:06.832 Latency(us) 00:13:06.832 [2024-12-13T04:29:06.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.832 
[2024-12-13T04:29:06.847Z] =================================================================================================================== 00:13:06.832 [2024-12-13T04:29:06.847Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:06.832 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 90331 00:13:06.832 [2024-12-13 04:29:06.595858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.832 [2024-12-13 04:29:06.595956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.832 [2024-12-13 04:29:06.596011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.832 [2024-12-13 04:29:06.596027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:06.832 04:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 90331 00:13:06.832 [2024-12-13 04:29:06.690414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:07.092 ************************************ 00:13:07.092 END TEST raid_rebuild_test_sb 00:13:07.092 ************************************ 00:13:07.092 00:13:07.092 real 0m23.988s 00:13:07.092 user 0m29.232s 00:13:07.092 sys 0m3.989s 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.092 04:29:07 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:07.092 04:29:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:07.092 04:29:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.092 04:29:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:07.092 ************************************ 00:13:07.092 START TEST raid_rebuild_test_io 00:13:07.092 ************************************ 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91074 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91074 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 91074 ']' 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.092 04:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.353 04:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.353 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:07.353 Zero copy mechanism will not be used. 00:13:07.353 [2024-12-13 04:29:07.183648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:07.353 [2024-12-13 04:29:07.183853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91074 ] 00:13:07.353 [2024-12-13 04:29:07.341148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.613 [2024-12-13 04:29:07.381317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.613 [2024-12-13 04:29:07.456679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.613 [2024-12-13 04:29:07.456801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.183 BaseBdev1_malloc 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.183 [2024-12-13 04:29:08.041108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:08.183 [2024-12-13 04:29:08.041178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.183 [2024-12-13 04:29:08.041214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:08.183 [2024-12-13 04:29:08.041226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.183 [2024-12-13 04:29:08.043597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.183 [2024-12-13 04:29:08.043702] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.183 BaseBdev1 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:08.183 BaseBdev2_malloc 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.183 [2024-12-13 04:29:08.075421] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:08.183 [2024-12-13 04:29:08.075554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.183 [2024-12-13 04:29:08.075584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:08.183 [2024-12-13 04:29:08.075594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.183 [2024-12-13 04:29:08.077987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.183 [2024-12-13 04:29:08.078027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.183 BaseBdev2 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.183 BaseBdev3_malloc 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.183 [2024-12-13 04:29:08.110165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:08.183 [2024-12-13 04:29:08.110304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.184 [2024-12-13 04:29:08.110338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:08.184 [2024-12-13 04:29:08.110348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.184 [2024-12-13 04:29:08.112702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.184 [2024-12-13 04:29:08.112735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.184 BaseBdev3 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.184 BaseBdev4_malloc 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.184 [2024-12-13 04:29:08.163783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:08.184 [2024-12-13 04:29:08.163847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.184 [2024-12-13 04:29:08.163879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:08.184 [2024-12-13 04:29:08.163892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.184 [2024-12-13 04:29:08.167114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.184 [2024-12-13 04:29:08.167159] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:08.184 BaseBdev4 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.184 spare_malloc 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.184 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.444 spare_delay 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.444 [2024-12-13 04:29:08.210139] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:08.444 [2024-12-13 04:29:08.210255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.444 [2024-12-13 04:29:08.210279] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:08.444 [2024-12-13 04:29:08.210287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.444 [2024-12-13 04:29:08.212800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.444 [2024-12-13 04:29:08.212836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:08.444 spare 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.444 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.444 [2024-12-13 04:29:08.222205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.444 [2024-12-13 04:29:08.224303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.444 [2024-12-13 04:29:08.224366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.444 [2024-12-13 04:29:08.224412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:08.444 [2024-12-13 04:29:08.224509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:08.444 [2024-12-13 04:29:08.224518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:08.444 [2024-12-13 04:29:08.224805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:08.444 [2024-12-13 04:29:08.224941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:08.444 [2024-12-13 04:29:08.224959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:08.445 [2024-12-13 04:29:08.225093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.445 "name": "raid_bdev1", 00:13:08.445 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:08.445 "strip_size_kb": 0, 00:13:08.445 "state": "online", 00:13:08.445 "raid_level": "raid1", 00:13:08.445 "superblock": false, 00:13:08.445 "num_base_bdevs": 4, 00:13:08.445 "num_base_bdevs_discovered": 4, 00:13:08.445 "num_base_bdevs_operational": 4, 00:13:08.445 "base_bdevs_list": [ 00:13:08.445 { 00:13:08.445 "name": "BaseBdev1", 00:13:08.445 "uuid": "64fa45ed-ca57-53d7-b0ef-e2e67ca59d88", 00:13:08.445 "is_configured": true, 00:13:08.445 "data_offset": 0, 00:13:08.445 "data_size": 65536 00:13:08.445 }, 00:13:08.445 { 00:13:08.445 "name": "BaseBdev2", 00:13:08.445 "uuid": "fad88a99-5497-53d2-9fcb-ea734e25cff7", 00:13:08.445 "is_configured": true, 00:13:08.445 "data_offset": 0, 00:13:08.445 "data_size": 65536 00:13:08.445 }, 00:13:08.445 { 00:13:08.445 "name": "BaseBdev3", 00:13:08.445 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:08.445 "is_configured": true, 00:13:08.445 "data_offset": 0, 00:13:08.445 "data_size": 65536 00:13:08.445 }, 00:13:08.445 { 00:13:08.445 "name": "BaseBdev4", 00:13:08.445 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:08.445 "is_configured": true, 00:13:08.445 "data_offset": 0, 00:13:08.445 "data_size": 65536 00:13:08.445 } 00:13:08.445 ] 00:13:08.445 }' 00:13:08.445 
04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.445 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.704 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.705 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:08.705 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.705 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.705 [2024-12-13 04:29:08.677711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.705 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.705 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:08.964 04:29:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.964 [2024-12-13 04:29:08.777228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.964 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.964 "name": "raid_bdev1", 00:13:08.965 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:08.965 "strip_size_kb": 0, 00:13:08.965 "state": "online", 00:13:08.965 "raid_level": "raid1", 00:13:08.965 "superblock": false, 00:13:08.965 "num_base_bdevs": 4, 00:13:08.965 "num_base_bdevs_discovered": 3, 00:13:08.965 "num_base_bdevs_operational": 3, 00:13:08.965 "base_bdevs_list": [ 00:13:08.965 { 00:13:08.965 "name": null, 00:13:08.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.965 "is_configured": false, 00:13:08.965 "data_offset": 0, 00:13:08.965 "data_size": 65536 00:13:08.965 }, 00:13:08.965 { 00:13:08.965 "name": "BaseBdev2", 00:13:08.965 "uuid": "fad88a99-5497-53d2-9fcb-ea734e25cff7", 00:13:08.965 "is_configured": true, 00:13:08.965 "data_offset": 0, 00:13:08.965 "data_size": 65536 00:13:08.965 }, 00:13:08.965 { 00:13:08.965 "name": "BaseBdev3", 00:13:08.965 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:08.965 "is_configured": true, 00:13:08.965 "data_offset": 0, 00:13:08.965 "data_size": 65536 00:13:08.965 }, 00:13:08.965 { 00:13:08.965 "name": "BaseBdev4", 00:13:08.965 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:08.965 "is_configured": true, 00:13:08.965 "data_offset": 0, 00:13:08.965 "data_size": 65536 00:13:08.965 } 00:13:08.965 ] 00:13:08.965 }' 00:13:08.965 04:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.965 04:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.965 [2024-12-13 04:29:08.868392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:08.965 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.965 Zero copy mechanism will not be used. 00:13:08.965 Running I/O for 60 seconds... 
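The repeated checks above come from SPDK's `verify_raid_bdev_state` helper in `test/bdev/bdev_raid.sh`: it fetches the array info with `rpc_cmd bdev_raid_get_bdevs all`, selects the entry by name with `jq`, then compares `state`, `raid_level`, and the discovered/operational counts against expectations. The sketch below illustrates that flow standalone; the RPC output is stubbed with a literal JSON fragment and the naive `grep`-based extractor replaces the real script's `jq -r` calls, so everything here beyond the field names taken from the log is an illustrative assumption, not the actual test code.

```shell
# Stubbed output of `rpc_cmd bdev_raid_get_bdevs all`, reduced to the
# flat fields the state check actually reads (values taken from the log).
raid_bdev_info='{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":3,"num_base_bdevs_operational":3}'

get_field() {
    # Naive extractor for the flat stub above; the real helper uses
    # `jq -r '.[] | select(.name == "raid_bdev1")'` and friends.
    echo "$raid_bdev_info" | grep -o "\"$1\":[^,}]*" | cut -d: -f2 | tr -d '"'
}

expected_state=online
expected_level=raid1
expected_operational=3

# Mirror of the comparisons verify_raid_bdev_state performs.
[ "$(get_field state)" = "$expected_state" ] || exit 1
[ "$(get_field raid_level)" = "$expected_level" ] || exit 1
[ "$(get_field num_base_bdevs_operational)" -eq "$expected_operational" ] || exit 1
echo "raid_bdev1 state verified"
```

After `bdev_raid_remove_base_bdev BaseBdev1`, the log shows exactly this check passing with `num_base_bdevs_discovered` and `num_base_bdevs_operational` dropping from 4 to 3 while `state` stays `online`, which is the expected raid1 degraded-but-operational behavior.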
00:13:09.225 04:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.225 04:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.225 04:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.225 [2024-12-13 04:29:09.203489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.225 04:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.225 04:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.485 [2024-12-13 04:29:09.248355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:09.485 [2024-12-13 04:29:09.250696] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.485 [2024-12-13 04:29:09.375667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:09.485 [2024-12-13 04:29:09.378029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:09.744 [2024-12-13 04:29:09.604631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:09.744 [2024-12-13 04:29:09.605789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.004 164.00 IOPS, 492.00 MiB/s [2024-12-13T04:29:10.019Z] [2024-12-13 04:29:09.959837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.264 [2024-12-13 04:29:10.196210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.264 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.523 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.523 "name": "raid_bdev1", 00:13:10.523 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:10.523 "strip_size_kb": 0, 00:13:10.523 "state": "online", 00:13:10.523 "raid_level": "raid1", 00:13:10.523 "superblock": false, 00:13:10.523 "num_base_bdevs": 4, 00:13:10.523 "num_base_bdevs_discovered": 4, 00:13:10.523 "num_base_bdevs_operational": 4, 00:13:10.523 "process": { 00:13:10.523 "type": "rebuild", 00:13:10.523 "target": "spare", 00:13:10.523 "progress": { 00:13:10.523 "blocks": 10240, 00:13:10.523 "percent": 15 00:13:10.523 } 00:13:10.523 }, 00:13:10.523 "base_bdevs_list": [ 00:13:10.523 { 00:13:10.523 "name": "spare", 00:13:10.523 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:10.523 "is_configured": true, 00:13:10.523 "data_offset": 0, 00:13:10.523 "data_size": 65536 00:13:10.523 }, 00:13:10.523 { 
00:13:10.523 "name": "BaseBdev2", 00:13:10.523 "uuid": "fad88a99-5497-53d2-9fcb-ea734e25cff7", 00:13:10.523 "is_configured": true, 00:13:10.523 "data_offset": 0, 00:13:10.523 "data_size": 65536 00:13:10.523 }, 00:13:10.523 { 00:13:10.523 "name": "BaseBdev3", 00:13:10.523 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:10.523 "is_configured": true, 00:13:10.523 "data_offset": 0, 00:13:10.523 "data_size": 65536 00:13:10.523 }, 00:13:10.524 { 00:13:10.524 "name": "BaseBdev4", 00:13:10.524 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:10.524 "is_configured": true, 00:13:10.524 "data_offset": 0, 00:13:10.524 "data_size": 65536 00:13:10.524 } 00:13:10.524 ] 00:13:10.524 }' 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.524 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.524 [2024-12-13 04:29:10.392729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.784 [2024-12-13 04:29:10.566614] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.784 [2024-12-13 04:29:10.577329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.784 [2024-12-13 04:29:10.577384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.784 [2024-12-13 04:29:10.577399] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.784 [2024-12-13 04:29:10.599202] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.784 "name": "raid_bdev1", 00:13:10.784 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:10.784 "strip_size_kb": 0, 00:13:10.784 "state": "online", 00:13:10.784 "raid_level": "raid1", 00:13:10.784 "superblock": false, 00:13:10.784 "num_base_bdevs": 4, 00:13:10.784 "num_base_bdevs_discovered": 3, 00:13:10.784 "num_base_bdevs_operational": 3, 00:13:10.784 "base_bdevs_list": [ 00:13:10.784 { 00:13:10.784 "name": null, 00:13:10.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.784 "is_configured": false, 00:13:10.784 "data_offset": 0, 00:13:10.784 "data_size": 65536 00:13:10.784 }, 00:13:10.784 { 00:13:10.784 "name": "BaseBdev2", 00:13:10.784 "uuid": "fad88a99-5497-53d2-9fcb-ea734e25cff7", 00:13:10.784 "is_configured": true, 00:13:10.784 "data_offset": 0, 00:13:10.784 "data_size": 65536 00:13:10.784 }, 00:13:10.784 { 00:13:10.784 "name": "BaseBdev3", 00:13:10.784 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:10.784 "is_configured": true, 00:13:10.784 "data_offset": 0, 00:13:10.784 "data_size": 65536 00:13:10.784 }, 00:13:10.784 { 00:13:10.784 "name": "BaseBdev4", 00:13:10.784 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:10.784 "is_configured": true, 00:13:10.784 "data_offset": 0, 00:13:10.784 "data_size": 65536 00:13:10.784 } 00:13:10.784 ] 00:13:10.784 }' 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.784 04:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.303 137.00 IOPS, 411.00 MiB/s [2024-12-13T04:29:11.318Z] 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.303 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.303 "name": "raid_bdev1", 00:13:11.303 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:11.303 "strip_size_kb": 0, 00:13:11.303 "state": "online", 00:13:11.303 "raid_level": "raid1", 00:13:11.303 "superblock": false, 00:13:11.303 "num_base_bdevs": 4, 00:13:11.303 "num_base_bdevs_discovered": 3, 00:13:11.303 "num_base_bdevs_operational": 3, 00:13:11.303 "base_bdevs_list": [ 00:13:11.303 { 00:13:11.303 "name": null, 00:13:11.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.303 "is_configured": false, 00:13:11.303 "data_offset": 0, 00:13:11.303 "data_size": 65536 00:13:11.303 }, 00:13:11.303 { 00:13:11.303 "name": "BaseBdev2", 00:13:11.303 "uuid": "fad88a99-5497-53d2-9fcb-ea734e25cff7", 00:13:11.303 "is_configured": true, 00:13:11.303 "data_offset": 0, 00:13:11.304 "data_size": 65536 00:13:11.304 }, 00:13:11.304 { 00:13:11.304 "name": "BaseBdev3", 00:13:11.304 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:11.304 "is_configured": true, 00:13:11.304 "data_offset": 0, 00:13:11.304 "data_size": 65536 00:13:11.304 }, 00:13:11.304 { 00:13:11.304 "name": "BaseBdev4", 00:13:11.304 
"uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:11.304 "is_configured": true, 00:13:11.304 "data_offset": 0, 00:13:11.304 "data_size": 65536 00:13:11.304 } 00:13:11.304 ] 00:13:11.304 }' 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.304 [2024-12-13 04:29:11.270489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.304 04:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:11.564 [2024-12-13 04:29:11.331559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:11.564 [2024-12-13 04:29:11.333879] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.564 [2024-12-13 04:29:11.463275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:11.564 [2024-12-13 04:29:11.465571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.133 173.67 IOPS, 521.00 MiB/s [2024-12-13T04:29:12.148Z] [2024-12-13 04:29:11.944707] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:12.133 [2024-12-13 04:29:11.945551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:12.133 [2024-12-13 04:29:12.056532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.393 "name": "raid_bdev1", 00:13:12.393 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:12.393 "strip_size_kb": 0, 00:13:12.393 "state": "online", 00:13:12.393 "raid_level": "raid1", 00:13:12.393 "superblock": false, 00:13:12.393 "num_base_bdevs": 4, 00:13:12.393 "num_base_bdevs_discovered": 4, 00:13:12.393 "num_base_bdevs_operational": 
4, 00:13:12.393 "process": { 00:13:12.393 "type": "rebuild", 00:13:12.393 "target": "spare", 00:13:12.393 "progress": { 00:13:12.393 "blocks": 14336, 00:13:12.393 "percent": 21 00:13:12.393 } 00:13:12.393 }, 00:13:12.393 "base_bdevs_list": [ 00:13:12.393 { 00:13:12.393 "name": "spare", 00:13:12.393 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:12.393 "is_configured": true, 00:13:12.393 "data_offset": 0, 00:13:12.393 "data_size": 65536 00:13:12.393 }, 00:13:12.393 { 00:13:12.393 "name": "BaseBdev2", 00:13:12.393 "uuid": "fad88a99-5497-53d2-9fcb-ea734e25cff7", 00:13:12.393 "is_configured": true, 00:13:12.393 "data_offset": 0, 00:13:12.393 "data_size": 65536 00:13:12.393 }, 00:13:12.393 { 00:13:12.393 "name": "BaseBdev3", 00:13:12.393 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:12.393 "is_configured": true, 00:13:12.393 "data_offset": 0, 00:13:12.393 "data_size": 65536 00:13:12.393 }, 00:13:12.393 { 00:13:12.393 "name": "BaseBdev4", 00:13:12.393 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:12.393 "is_configured": true, 00:13:12.393 "data_offset": 0, 00:13:12.393 "data_size": 65536 00:13:12.393 } 00:13:12.393 ] 00:13:12.393 }' 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.393 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.653 [2024-12-13 04:29:12.424233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.653 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.653 [2024-12-13 04:29:12.463960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.913 [2024-12-13 04:29:12.752717] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:12.913 [2024-12-13 04:29:12.752769] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.913 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.913 "name": "raid_bdev1", 00:13:12.913 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:12.913 "strip_size_kb": 0, 00:13:12.913 "state": "online", 00:13:12.913 "raid_level": "raid1", 00:13:12.913 "superblock": false, 00:13:12.913 "num_base_bdevs": 4, 00:13:12.913 "num_base_bdevs_discovered": 3, 00:13:12.913 "num_base_bdevs_operational": 3, 00:13:12.913 "process": { 00:13:12.913 "type": "rebuild", 00:13:12.913 "target": "spare", 00:13:12.913 "progress": { 00:13:12.913 "blocks": 18432, 00:13:12.913 "percent": 28 00:13:12.913 } 00:13:12.913 }, 00:13:12.913 "base_bdevs_list": [ 00:13:12.913 { 00:13:12.913 "name": "spare", 00:13:12.913 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:12.913 "is_configured": true, 00:13:12.913 "data_offset": 0, 00:13:12.913 "data_size": 65536 00:13:12.913 }, 00:13:12.913 { 00:13:12.913 "name": null, 00:13:12.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.914 "is_configured": false, 00:13:12.914 "data_offset": 0, 00:13:12.914 "data_size": 65536 00:13:12.914 }, 00:13:12.914 { 00:13:12.914 "name": "BaseBdev3", 00:13:12.914 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:12.914 "is_configured": true, 00:13:12.914 "data_offset": 0, 00:13:12.914 "data_size": 65536 00:13:12.914 }, 00:13:12.914 { 00:13:12.914 "name": "BaseBdev4", 00:13:12.914 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:12.914 "is_configured": true, 00:13:12.914 "data_offset": 0, 00:13:12.914 "data_size": 65536 00:13:12.914 } 00:13:12.914 ] 00:13:12.914 }' 
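The rebuild polling seen throughout this section (`verify_raid_bdev_process`) extracts `.process.type` and `.process.target` from the same RPC output with `jq -r '… // "none"'` and compares them to the expected `rebuild`/`spare` pair. A minimal standalone sketch of that comparison follows; the stub JSON is illustrative (values copied from the log), and `python3` stands in for `jq` only so the sketch runs without it.

```shell
# Stubbed rpc output for one poll, reduced to the process fields the
# check reads (blocks/percent values taken from the log above).
raid_bdev_info='{"process":{"type":"rebuild","target":"spare","progress":{"blocks":18432,"percent":28}}}'

# The real helper runs: jq -r '.process.type // "none"' and
# jq -r '.process.target // "none"'; python3 does the same extraction here.
process_type=$(python3 -c 'import json,sys; d=json.loads(sys.argv[1]); print(d.get("process",{}).get("type","none"))' "$raid_bdev_info")
process_target=$(python3 -c 'import json,sys; d=json.loads(sys.argv[1]); print(d.get("process",{}).get("target","none"))' "$raid_bdev_info")

[ "$process_type" = "rebuild" ] || exit 1
[ "$process_target" = "spare" ] || exit 1
echo "rebuild targeting spare in progress"
```

The `// "none"` fallback matters: once the rebuild finishes (or after `bdev_raid_remove_base_bdev spare` aborts it, as earlier in the log), the `process` object disappears from the RPC output and both extractions must yield `none` rather than fail.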
00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.914 149.25 IOPS, 447.75 MiB/s [2024-12-13T04:29:12.929Z] 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=401 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.914 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 04:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.174 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.174 "name": "raid_bdev1", 00:13:13.174 "uuid": 
"752f3efe-27be-4633-b99b-5b591f895031", 00:13:13.174 "strip_size_kb": 0, 00:13:13.174 "state": "online", 00:13:13.174 "raid_level": "raid1", 00:13:13.174 "superblock": false, 00:13:13.174 "num_base_bdevs": 4, 00:13:13.174 "num_base_bdevs_discovered": 3, 00:13:13.174 "num_base_bdevs_operational": 3, 00:13:13.174 "process": { 00:13:13.174 "type": "rebuild", 00:13:13.174 "target": "spare", 00:13:13.174 "progress": { 00:13:13.174 "blocks": 20480, 00:13:13.174 "percent": 31 00:13:13.174 } 00:13:13.174 }, 00:13:13.174 "base_bdevs_list": [ 00:13:13.174 { 00:13:13.174 "name": "spare", 00:13:13.174 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:13.174 "is_configured": true, 00:13:13.174 "data_offset": 0, 00:13:13.174 "data_size": 65536 00:13:13.174 }, 00:13:13.174 { 00:13:13.174 "name": null, 00:13:13.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.174 "is_configured": false, 00:13:13.174 "data_offset": 0, 00:13:13.174 "data_size": 65536 00:13:13.174 }, 00:13:13.174 { 00:13:13.174 "name": "BaseBdev3", 00:13:13.174 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:13.174 "is_configured": true, 00:13:13.174 "data_offset": 0, 00:13:13.174 "data_size": 65536 00:13:13.174 }, 00:13:13.174 { 00:13:13.174 "name": "BaseBdev4", 00:13:13.174 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:13.174 "is_configured": true, 00:13:13.174 "data_offset": 0, 00:13:13.174 "data_size": 65536 00:13:13.174 } 00:13:13.174 ] 00:13:13.174 }' 00:13:13.174 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.174 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.174 04:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.174 04:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.174 04:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:13:13.434 [2024-12-13 04:29:13.245544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:13.694 [2024-12-13 04:29:13.451496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:13.694 [2024-12-13 04:29:13.452102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:13.954 [2024-12-13 04:29:13.763275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:13.954 [2024-12-13 04:29:13.764689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:14.214 127.00 IOPS, 381.00 MiB/s [2024-12-13T04:29:14.229Z] 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.214 "name": "raid_bdev1", 00:13:14.214 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:14.214 "strip_size_kb": 0, 00:13:14.214 "state": "online", 00:13:14.214 "raid_level": "raid1", 00:13:14.214 "superblock": false, 00:13:14.214 "num_base_bdevs": 4, 00:13:14.214 "num_base_bdevs_discovered": 3, 00:13:14.214 "num_base_bdevs_operational": 3, 00:13:14.214 "process": { 00:13:14.214 "type": "rebuild", 00:13:14.214 "target": "spare", 00:13:14.214 "progress": { 00:13:14.214 "blocks": 34816, 00:13:14.214 "percent": 53 00:13:14.214 } 00:13:14.214 }, 00:13:14.214 "base_bdevs_list": [ 00:13:14.214 { 00:13:14.214 "name": "spare", 00:13:14.214 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:14.214 "is_configured": true, 00:13:14.214 "data_offset": 0, 00:13:14.214 "data_size": 65536 00:13:14.214 }, 00:13:14.214 { 00:13:14.214 "name": null, 00:13:14.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.214 "is_configured": false, 00:13:14.214 "data_offset": 0, 00:13:14.214 "data_size": 65536 00:13:14.214 }, 00:13:14.214 { 00:13:14.214 "name": "BaseBdev3", 00:13:14.214 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:14.214 "is_configured": true, 00:13:14.214 "data_offset": 0, 00:13:14.214 "data_size": 65536 00:13:14.214 }, 00:13:14.214 { 00:13:14.214 "name": "BaseBdev4", 00:13:14.214 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:14.214 "is_configured": true, 00:13:14.214 "data_offset": 0, 00:13:14.214 "data_size": 65536 00:13:14.214 } 00:13:14.214 ] 00:13:14.214 }' 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.214 04:29:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.214 04:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.214 [2024-12-13 04:29:14.195622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:14.214 [2024-12-13 04:29:14.196084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:14.474 [2024-12-13 04:29:14.413708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:15.303 111.67 IOPS, 335.00 MiB/s [2024-12-13T04:29:15.318Z] 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.303 04:29:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.303 "name": "raid_bdev1", 00:13:15.303 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:15.303 "strip_size_kb": 0, 00:13:15.303 "state": "online", 00:13:15.303 "raid_level": "raid1", 00:13:15.303 "superblock": false, 00:13:15.303 "num_base_bdevs": 4, 00:13:15.303 "num_base_bdevs_discovered": 3, 00:13:15.303 "num_base_bdevs_operational": 3, 00:13:15.303 "process": { 00:13:15.303 "type": "rebuild", 00:13:15.303 "target": "spare", 00:13:15.303 "progress": { 00:13:15.303 "blocks": 53248, 00:13:15.303 "percent": 81 00:13:15.303 } 00:13:15.303 }, 00:13:15.303 "base_bdevs_list": [ 00:13:15.303 { 00:13:15.303 "name": "spare", 00:13:15.303 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:15.303 "is_configured": true, 00:13:15.303 "data_offset": 0, 00:13:15.303 "data_size": 65536 00:13:15.303 }, 00:13:15.303 { 00:13:15.303 "name": null, 00:13:15.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.303 "is_configured": false, 00:13:15.303 "data_offset": 0, 00:13:15.303 "data_size": 65536 00:13:15.303 }, 00:13:15.303 { 00:13:15.303 "name": "BaseBdev3", 00:13:15.303 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:15.303 "is_configured": true, 00:13:15.303 "data_offset": 0, 00:13:15.303 "data_size": 65536 00:13:15.303 }, 00:13:15.303 { 00:13:15.303 "name": "BaseBdev4", 00:13:15.303 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:15.303 "is_configured": true, 00:13:15.303 "data_offset": 0, 00:13:15.303 "data_size": 65536 00:13:15.303 } 00:13:15.303 ] 00:13:15.303 }' 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.303 04:29:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.303 [2024-12-13 04:29:15.315045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:15.563 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.563 04:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.563 [2024-12-13 04:29:15.537065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:16.131 101.29 IOPS, 303.86 MiB/s [2024-12-13T04:29:16.146Z] [2024-12-13 04:29:15.973529] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:16.131 [2024-12-13 04:29:16.073293] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:16.131 [2024-12-13 04:29:16.076845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.391 "name": "raid_bdev1", 00:13:16.391 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:16.391 "strip_size_kb": 0, 00:13:16.391 "state": "online", 00:13:16.391 "raid_level": "raid1", 00:13:16.391 "superblock": false, 00:13:16.391 "num_base_bdevs": 4, 00:13:16.391 "num_base_bdevs_discovered": 3, 00:13:16.391 "num_base_bdevs_operational": 3, 00:13:16.391 "base_bdevs_list": [ 00:13:16.391 { 00:13:16.391 "name": "spare", 00:13:16.391 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:16.391 "is_configured": true, 00:13:16.391 "data_offset": 0, 00:13:16.391 "data_size": 65536 00:13:16.391 }, 00:13:16.391 { 00:13:16.391 "name": null, 00:13:16.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.391 "is_configured": false, 00:13:16.391 "data_offset": 0, 00:13:16.391 "data_size": 65536 00:13:16.391 }, 00:13:16.391 { 00:13:16.391 "name": "BaseBdev3", 00:13:16.391 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:16.391 "is_configured": true, 00:13:16.391 "data_offset": 0, 00:13:16.391 "data_size": 65536 00:13:16.391 }, 00:13:16.391 { 00:13:16.391 "name": "BaseBdev4", 00:13:16.391 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:16.391 "is_configured": true, 00:13:16.391 "data_offset": 0, 00:13:16.391 "data_size": 65536 00:13:16.391 } 00:13:16.391 ] 00:13:16.391 }' 00:13:16.391 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.651 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.651 "name": "raid_bdev1", 00:13:16.651 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:16.651 "strip_size_kb": 0, 00:13:16.651 "state": "online", 00:13:16.651 "raid_level": "raid1", 00:13:16.651 "superblock": false, 00:13:16.651 "num_base_bdevs": 4, 00:13:16.651 "num_base_bdevs_discovered": 3, 00:13:16.651 "num_base_bdevs_operational": 3, 00:13:16.651 "base_bdevs_list": [ 00:13:16.651 { 00:13:16.651 "name": "spare", 00:13:16.651 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:16.651 "is_configured": true, 00:13:16.651 "data_offset": 0, 00:13:16.651 "data_size": 65536 
00:13:16.651 }, 00:13:16.651 { 00:13:16.651 "name": null, 00:13:16.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.652 "is_configured": false, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 }, 00:13:16.652 { 00:13:16.652 "name": "BaseBdev3", 00:13:16.652 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:16.652 "is_configured": true, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 }, 00:13:16.652 { 00:13:16.652 "name": "BaseBdev4", 00:13:16.652 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:16.652 "is_configured": true, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 } 00:13:16.652 ] 00:13:16.652 }' 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.652 04:29:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.652 "name": "raid_bdev1", 00:13:16.652 "uuid": "752f3efe-27be-4633-b99b-5b591f895031", 00:13:16.652 "strip_size_kb": 0, 00:13:16.652 "state": "online", 00:13:16.652 "raid_level": "raid1", 00:13:16.652 "superblock": false, 00:13:16.652 "num_base_bdevs": 4, 00:13:16.652 "num_base_bdevs_discovered": 3, 00:13:16.652 "num_base_bdevs_operational": 3, 00:13:16.652 "base_bdevs_list": [ 00:13:16.652 { 00:13:16.652 "name": "spare", 00:13:16.652 "uuid": "44f369fa-c802-5386-bb21-7dddc506662f", 00:13:16.652 "is_configured": true, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 }, 00:13:16.652 { 00:13:16.652 "name": null, 00:13:16.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.652 "is_configured": false, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 }, 00:13:16.652 { 00:13:16.652 "name": "BaseBdev3", 00:13:16.652 "uuid": "e726cb01-6a35-5ea6-8b77-e4ecb0f11c04", 00:13:16.652 "is_configured": true, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 }, 
00:13:16.652 { 00:13:16.652 "name": "BaseBdev4", 00:13:16.652 "uuid": "0e745970-a3c4-5a26-8c0c-ab3934c4be32", 00:13:16.652 "is_configured": true, 00:13:16.652 "data_offset": 0, 00:13:16.652 "data_size": 65536 00:13:16.652 } 00:13:16.652 ] 00:13:16.652 }' 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.652 04:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 92.88 IOPS, 278.62 MiB/s [2024-12-13T04:29:17.185Z] 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.170 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.170 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 [2024-12-13 04:29:17.061219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.170 [2024-12-13 04:29:17.061311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.170 00:13:17.170 Latency(us) 00:13:17.170 [2024-12-13T04:29:17.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.170 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:17.170 raid_bdev1 : 8.31 90.41 271.24 0.00 0.00 16298.14 282.61 119968.08 00:13:17.170 [2024-12-13T04:29:17.185Z] =================================================================================================================== 00:13:17.170 [2024-12-13T04:29:17.185Z] Total : 90.41 271.24 0.00 0.00 16298.14 282.61 119968.08 00:13:17.170 [2024-12-13 04:29:17.164390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.170 { 00:13:17.170 "results": [ 00:13:17.170 { 00:13:17.170 "job": "raid_bdev1", 00:13:17.170 "core_mask": "0x1", 00:13:17.170 "workload": "randrw", 00:13:17.170 "percentage": 50, 00:13:17.170 "status": "finished", 
00:13:17.170 "queue_depth": 2, 00:13:17.170 "io_size": 3145728, 00:13:17.170 "runtime": 8.306155, 00:13:17.170 "iops": 90.41487908665322, 00:13:17.170 "mibps": 271.24463725995963, 00:13:17.170 "io_failed": 0, 00:13:17.170 "io_timeout": 0, 00:13:17.170 "avg_latency_us": 16298.1425964798, 00:13:17.170 "min_latency_us": 282.6061135371179, 00:13:17.171 "max_latency_us": 119968.08384279476 00:13:17.171 } 00:13:17.171 ], 00:13:17.171 "core_count": 1 00:13:17.171 } 00:13:17.171 [2024-12-13 04:29:17.164540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.171 [2024-12-13 04:29:17.164651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.171 [2024-12-13 04:29:17.164662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:17.171 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.171 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.171 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:17.171 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.171 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:17.431 /dev/nbd0 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.431 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.691 1+0 records in 00:13:17.691 1+0 records out 00:13:17.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415392 s, 9.9 MB/s 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 
00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:17.691 /dev/nbd1 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:17.691 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:13:17.951 1+0 records in 00:13:17.951 1+0 records out 00:13:17.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436031 s, 9.4 MB/s 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.951 04:29:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.211 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:18.211 /dev/nbd1 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.471 1+0 records in 00:13:18.471 1+0 records out 00:13:18.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284508 s, 14.4 MB/s 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:18.471 04:29:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.471 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.731 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 91074 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 91074 ']' 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@958 -- # kill -0 91074 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91074 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.991 killing process with pid 91074 00:13:18.991 Received shutdown signal, test time was about 9.966886 seconds 00:13:18.991 00:13:18.991 Latency(us) 00:13:18.991 [2024-12-13T04:29:19.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.991 [2024-12-13T04:29:19.006Z] =================================================================================================================== 00:13:18.991 [2024-12-13T04:29:19.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91074' 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 91074 00:13:18.991 [2024-12-13 04:29:18.818487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.991 04:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 91074 00:13:18.991 [2024-12-13 04:29:18.901983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.250 04:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:19.250 00:13:19.250 real 0m12.137s 00:13:19.250 user 0m15.618s 00:13:19.250 sys 0m1.869s 00:13:19.250 04:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.250 04:29:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.250 ************************************ 00:13:19.250 END TEST raid_rebuild_test_io 00:13:19.250 ************************************ 00:13:19.510 04:29:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:19.510 04:29:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:19.510 04:29:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.510 04:29:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.510 ************************************ 00:13:19.510 START TEST raid_rebuild_test_sb_io 00:13:19.510 ************************************ 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.510 04:29:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = 
true ']' 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91472 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91472 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 91472 ']' 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.510 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.511 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.511 04:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.511 [2024-12-13 04:29:19.404345] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:19.511 [2024-12-13 04:29:19.404586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:19.511 Zero copy mechanism will not be used. 
00:13:19.511 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91472 ] 00:13:19.770 [2024-12-13 04:29:19.559835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.770 [2024-12-13 04:29:19.598933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.770 [2024-12-13 04:29:19.676469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.770 [2024-12-13 04:29:19.676589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 BaseBdev1_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 [2024-12-13 04:29:20.245870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:20.341 [2024-12-13 04:29:20.245932] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.341 [2024-12-13 04:29:20.245960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:20.341 [2024-12-13 04:29:20.245973] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.341 [2024-12-13 04:29:20.248351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.341 [2024-12-13 04:29:20.248399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.341 BaseBdev1 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 BaseBdev2_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 [2024-12-13 04:29:20.280535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:20.341 [2024-12-13 04:29:20.280657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.341 [2024-12-13 04:29:20.280699] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007280 00:13:20.341 [2024-12-13 04:29:20.280727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.341 [2024-12-13 04:29:20.283078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.341 [2024-12-13 04:29:20.283149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.341 BaseBdev2 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 BaseBdev3_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.341 [2024-12-13 04:29:20.315161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:20.341 [2024-12-13 04:29:20.315278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.341 [2024-12-13 04:29:20.315323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:20.341 [2024-12-13 04:29:20.315358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.341 [2024-12-13 
04:29:20.317806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.341 [2024-12-13 04:29:20.317875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:20.341 BaseBdev3 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.341 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.602 BaseBdev4_malloc 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.602 [2024-12-13 04:29:20.366515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:20.602 [2024-12-13 04:29:20.366652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.602 [2024-12-13 04:29:20.366718] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:20.602 [2024-12-13 04:29:20.366773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.602 [2024-12-13 04:29:20.370319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.602 [2024-12-13 04:29:20.370369] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:20.602 BaseBdev4 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.602 spare_malloc 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.602 spare_delay 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.602 [2024-12-13 04:29:20.414365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.602 [2024-12-13 04:29:20.414467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.602 [2024-12-13 04:29:20.414520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:20.602 [2024-12-13 04:29:20.414548] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.602 [2024-12-13 04:29:20.416956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.602 [2024-12-13 04:29:20.417030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.602 spare 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.602 [2024-12-13 04:29:20.426415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.602 [2024-12-13 04:29:20.428554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.602 [2024-12-13 04:29:20.428617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.602 [2024-12-13 04:29:20.428661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:20.602 [2024-12-13 04:29:20.428846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:20.602 [2024-12-13 04:29:20.428866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.602 [2024-12-13 04:29:20.429112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:20.602 [2024-12-13 04:29:20.429249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:20.602 [2024-12-13 04:29:20.429262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:20.602 
[2024-12-13 04:29:20.429378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.602 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:20.603 "name": "raid_bdev1", 00:13:20.603 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:20.603 "strip_size_kb": 0, 00:13:20.603 "state": "online", 00:13:20.603 "raid_level": "raid1", 00:13:20.603 "superblock": true, 00:13:20.603 "num_base_bdevs": 4, 00:13:20.603 "num_base_bdevs_discovered": 4, 00:13:20.603 "num_base_bdevs_operational": 4, 00:13:20.603 "base_bdevs_list": [ 00:13:20.603 { 00:13:20.603 "name": "BaseBdev1", 00:13:20.603 "uuid": "d3ff5c3f-b17f-5492-b1c2-5730e8492dbd", 00:13:20.603 "is_configured": true, 00:13:20.603 "data_offset": 2048, 00:13:20.603 "data_size": 63488 00:13:20.603 }, 00:13:20.603 { 00:13:20.603 "name": "BaseBdev2", 00:13:20.603 "uuid": "737eba1e-3c56-551d-8707-87353cc2160b", 00:13:20.603 "is_configured": true, 00:13:20.603 "data_offset": 2048, 00:13:20.603 "data_size": 63488 00:13:20.603 }, 00:13:20.603 { 00:13:20.603 "name": "BaseBdev3", 00:13:20.603 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:20.603 "is_configured": true, 00:13:20.603 "data_offset": 2048, 00:13:20.603 "data_size": 63488 00:13:20.603 }, 00:13:20.603 { 00:13:20.603 "name": "BaseBdev4", 00:13:20.603 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:20.603 "is_configured": true, 00:13:20.603 "data_offset": 2048, 00:13:20.603 "data_size": 63488 00:13:20.603 } 00:13:20.603 ] 00:13:20.603 }' 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.603 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.862 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.862 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:20.862 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.862 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:21.122 [2024-12-13 04:29:20.877872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.122 [2024-12-13 04:29:20.973410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:21.122 04:29:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.122 04:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.122 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.122 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.122 "name": "raid_bdev1", 00:13:21.122 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:21.122 "strip_size_kb": 0, 00:13:21.122 "state": "online", 00:13:21.122 "raid_level": "raid1", 00:13:21.122 "superblock": true, 00:13:21.122 "num_base_bdevs": 4, 00:13:21.122 "num_base_bdevs_discovered": 3, 00:13:21.122 "num_base_bdevs_operational": 3, 
00:13:21.122 "base_bdevs_list": [ 00:13:21.122 { 00:13:21.122 "name": null, 00:13:21.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.122 "is_configured": false, 00:13:21.122 "data_offset": 0, 00:13:21.122 "data_size": 63488 00:13:21.122 }, 00:13:21.122 { 00:13:21.122 "name": "BaseBdev2", 00:13:21.122 "uuid": "737eba1e-3c56-551d-8707-87353cc2160b", 00:13:21.122 "is_configured": true, 00:13:21.122 "data_offset": 2048, 00:13:21.122 "data_size": 63488 00:13:21.122 }, 00:13:21.122 { 00:13:21.122 "name": "BaseBdev3", 00:13:21.122 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:21.122 "is_configured": true, 00:13:21.122 "data_offset": 2048, 00:13:21.122 "data_size": 63488 00:13:21.122 }, 00:13:21.122 { 00:13:21.122 "name": "BaseBdev4", 00:13:21.122 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:21.122 "is_configured": true, 00:13:21.122 "data_offset": 2048, 00:13:21.122 "data_size": 63488 00:13:21.122 } 00:13:21.122 ] 00:13:21.122 }' 00:13:21.122 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.122 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.122 [2024-12-13 04:29:21.060778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:21.122 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.122 Zero copy mechanism will not be used. 00:13:21.122 Running I/O for 60 seconds... 
00:13:21.692 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.692 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.692 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.692 [2024-12-13 04:29:21.405436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.692 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.692 04:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.692 [2024-12-13 04:29:21.455833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:21.692 [2024-12-13 04:29:21.458300] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.692 [2024-12-13 04:29:21.572242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:21.692 [2024-12-13 04:29:21.572643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:21.952 [2024-12-13 04:29:21.785021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:21.952 [2024-12-13 04:29:21.786225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:22.211 128.00 IOPS, 384.00 MiB/s [2024-12-13T04:29:22.226Z] [2024-12-13 04:29:22.115943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:22.469 [2024-12-13 04:29:22.339490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:22.469 [2024-12-13 04:29:22.340639] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.469 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.729 "name": "raid_bdev1", 00:13:22.729 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:22.729 "strip_size_kb": 0, 00:13:22.729 "state": "online", 00:13:22.729 "raid_level": "raid1", 00:13:22.729 "superblock": true, 00:13:22.729 "num_base_bdevs": 4, 00:13:22.729 "num_base_bdevs_discovered": 4, 00:13:22.729 "num_base_bdevs_operational": 4, 00:13:22.729 "process": { 00:13:22.729 "type": "rebuild", 00:13:22.729 "target": "spare", 00:13:22.729 "progress": { 00:13:22.729 "blocks": 10240, 00:13:22.729 "percent": 16 00:13:22.729 } 00:13:22.729 }, 00:13:22.729 "base_bdevs_list": [ 00:13:22.729 { 00:13:22.729 "name": "spare", 
00:13:22.729 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:22.729 "is_configured": true, 00:13:22.729 "data_offset": 2048, 00:13:22.729 "data_size": 63488 00:13:22.729 }, 00:13:22.729 { 00:13:22.729 "name": "BaseBdev2", 00:13:22.729 "uuid": "737eba1e-3c56-551d-8707-87353cc2160b", 00:13:22.729 "is_configured": true, 00:13:22.729 "data_offset": 2048, 00:13:22.729 "data_size": 63488 00:13:22.729 }, 00:13:22.729 { 00:13:22.729 "name": "BaseBdev3", 00:13:22.729 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:22.729 "is_configured": true, 00:13:22.729 "data_offset": 2048, 00:13:22.729 "data_size": 63488 00:13:22.729 }, 00:13:22.729 { 00:13:22.729 "name": "BaseBdev4", 00:13:22.729 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:22.729 "is_configured": true, 00:13:22.729 "data_offset": 2048, 00:13:22.729 "data_size": 63488 00:13:22.729 } 00:13:22.729 ] 00:13:22.729 }' 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.729 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.729 [2024-12-13 04:29:22.593184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.729 [2024-12-13 04:29:22.707735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.729 [2024-12-13 04:29:22.728370] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.729 [2024-12-13 04:29:22.728428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.729 [2024-12-13 04:29:22.728469] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.989 [2024-12-13 04:29:22.751270] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.989 04:29:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.989 "name": "raid_bdev1", 00:13:22.989 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:22.989 "strip_size_kb": 0, 00:13:22.989 "state": "online", 00:13:22.989 "raid_level": "raid1", 00:13:22.989 "superblock": true, 00:13:22.989 "num_base_bdevs": 4, 00:13:22.989 "num_base_bdevs_discovered": 3, 00:13:22.989 "num_base_bdevs_operational": 3, 00:13:22.989 "base_bdevs_list": [ 00:13:22.989 { 00:13:22.989 "name": null, 00:13:22.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.989 "is_configured": false, 00:13:22.989 "data_offset": 0, 00:13:22.989 "data_size": 63488 00:13:22.989 }, 00:13:22.989 { 00:13:22.989 "name": "BaseBdev2", 00:13:22.989 "uuid": "737eba1e-3c56-551d-8707-87353cc2160b", 00:13:22.989 "is_configured": true, 00:13:22.989 "data_offset": 2048, 00:13:22.989 "data_size": 63488 00:13:22.989 }, 00:13:22.989 { 00:13:22.989 "name": "BaseBdev3", 00:13:22.989 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:22.989 "is_configured": true, 00:13:22.989 "data_offset": 2048, 00:13:22.989 "data_size": 63488 00:13:22.989 }, 00:13:22.989 { 00:13:22.989 "name": "BaseBdev4", 00:13:22.989 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:22.989 "is_configured": true, 00:13:22.989 "data_offset": 2048, 00:13:22.989 "data_size": 63488 00:13:22.989 } 00:13:22.989 ] 00:13:22.989 }' 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.989 04:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.248 120.50 IOPS, 361.50 MiB/s [2024-12-13T04:29:23.263Z] 04:29:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.248 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.508 "name": "raid_bdev1", 00:13:23.508 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:23.508 "strip_size_kb": 0, 00:13:23.508 "state": "online", 00:13:23.508 "raid_level": "raid1", 00:13:23.508 "superblock": true, 00:13:23.508 "num_base_bdevs": 4, 00:13:23.508 "num_base_bdevs_discovered": 3, 00:13:23.508 "num_base_bdevs_operational": 3, 00:13:23.508 "base_bdevs_list": [ 00:13:23.508 { 00:13:23.508 "name": null, 00:13:23.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.508 "is_configured": false, 00:13:23.508 "data_offset": 0, 00:13:23.508 "data_size": 63488 00:13:23.508 }, 00:13:23.508 { 00:13:23.508 "name": "BaseBdev2", 00:13:23.508 "uuid": "737eba1e-3c56-551d-8707-87353cc2160b", 00:13:23.508 "is_configured": true, 00:13:23.508 "data_offset": 
2048, 00:13:23.508 "data_size": 63488 00:13:23.508 }, 00:13:23.508 { 00:13:23.508 "name": "BaseBdev3", 00:13:23.508 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:23.508 "is_configured": true, 00:13:23.508 "data_offset": 2048, 00:13:23.508 "data_size": 63488 00:13:23.508 }, 00:13:23.508 { 00:13:23.508 "name": "BaseBdev4", 00:13:23.508 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:23.508 "is_configured": true, 00:13:23.508 "data_offset": 2048, 00:13:23.508 "data_size": 63488 00:13:23.508 } 00:13:23.508 ] 00:13:23.508 }' 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.508 [2024-12-13 04:29:23.411425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.508 04:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:23.508 [2024-12-13 04:29:23.462546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:23.508 [2024-12-13 04:29:23.464826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.768 [2024-12-13 04:29:23.584357] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.768 [2024-12-13 04:29:23.586617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.028 [2024-12-13 04:29:23.837469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.028 [2024-12-13 04:29:23.837948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.287 136.33 IOPS, 409.00 MiB/s [2024-12-13T04:29:24.302Z] [2024-12-13 04:29:24.093029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.547 "name": "raid_bdev1", 00:13:24.547 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:24.547 "strip_size_kb": 0, 00:13:24.547 "state": "online", 00:13:24.547 "raid_level": "raid1", 00:13:24.547 "superblock": true, 00:13:24.547 "num_base_bdevs": 4, 00:13:24.547 "num_base_bdevs_discovered": 4, 00:13:24.547 "num_base_bdevs_operational": 4, 00:13:24.547 "process": { 00:13:24.547 "type": "rebuild", 00:13:24.547 "target": "spare", 00:13:24.547 "progress": { 00:13:24.547 "blocks": 12288, 00:13:24.547 "percent": 19 00:13:24.547 } 00:13:24.547 }, 00:13:24.547 "base_bdevs_list": [ 00:13:24.547 { 00:13:24.547 "name": "spare", 00:13:24.547 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 }, 00:13:24.547 { 00:13:24.547 "name": "BaseBdev2", 00:13:24.547 "uuid": "737eba1e-3c56-551d-8707-87353cc2160b", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 }, 00:13:24.547 { 00:13:24.547 "name": "BaseBdev3", 00:13:24.547 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 }, 00:13:24.547 { 00:13:24.547 "name": "BaseBdev4", 00:13:24.547 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 } 00:13:24.547 ] 00:13:24.547 }' 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.547 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.807 [2024-12-13 04:29:24.584850] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:24.807 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.807 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.807 [2024-12-13 04:29:24.620291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.067 [2024-12-13 04:29:24.954427] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:25.067 [2024-12-13 04:29:24.954532] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # 
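[Editor's note: the `bdev_raid.sh: line 666: [: =: unary operator expected` message recorded above is a classic single-bracket pitfall: when an unquoted variable expands to nothing inside `[ ... ]`, the test operator is left with too few operands. A minimal sketch of the failure mode and the usual quoting fix (the variable name `flag` is hypothetical, not taken from bdev_raid.sh):]

```shell
#!/bin/sh
# With an unset/empty variable, the unquoted form '[ $flag = false ]'
# degenerates to '[ = false ]' and errors with "unary operator expected".
# Quoting the expansion (or using bash's [[ ]]) keeps the operand present.
unset flag
if [ "${flag:-}" = false ]; then
  echo "flag is false"
else
  echo "flag is unset or not false"
fi
```

The same effect can be had in bash with `[[ $flag = false ]]`, since `[[ ]]` does not word-split its operands.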
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.067 04:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.067 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.067 "name": "raid_bdev1", 00:13:25.067 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:25.067 "strip_size_kb": 0, 00:13:25.067 "state": "online", 00:13:25.067 "raid_level": "raid1", 00:13:25.067 "superblock": true, 00:13:25.067 "num_base_bdevs": 4, 00:13:25.067 "num_base_bdevs_discovered": 3, 00:13:25.067 "num_base_bdevs_operational": 3, 00:13:25.067 "process": { 00:13:25.067 "type": "rebuild", 00:13:25.067 "target": "spare", 00:13:25.067 "progress": { 00:13:25.067 "blocks": 18432, 00:13:25.067 "percent": 29 00:13:25.067 } 00:13:25.067 }, 00:13:25.067 "base_bdevs_list": [ 00:13:25.067 { 00:13:25.067 "name": "spare", 00:13:25.067 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:25.067 "is_configured": true, 00:13:25.067 "data_offset": 2048, 00:13:25.067 "data_size": 63488 00:13:25.067 }, 00:13:25.067 { 
00:13:25.067 "name": null, 00:13:25.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.067 "is_configured": false, 00:13:25.067 "data_offset": 0, 00:13:25.067 "data_size": 63488 00:13:25.067 }, 00:13:25.067 { 00:13:25.067 "name": "BaseBdev3", 00:13:25.067 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:25.067 "is_configured": true, 00:13:25.067 "data_offset": 2048, 00:13:25.067 "data_size": 63488 00:13:25.067 }, 00:13:25.067 { 00:13:25.067 "name": "BaseBdev4", 00:13:25.067 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:25.067 "is_configured": true, 00:13:25.067 "data_offset": 2048, 00:13:25.067 "data_size": 63488 00:13:25.067 } 00:13:25.067 ] 00:13:25.067 }' 00:13:25.067 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.067 130.00 IOPS, 390.00 MiB/s [2024-12-13T04:29:25.082Z] 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.067 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.340 "name": "raid_bdev1", 00:13:25.340 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:25.340 "strip_size_kb": 0, 00:13:25.340 "state": "online", 00:13:25.340 "raid_level": "raid1", 00:13:25.340 "superblock": true, 00:13:25.340 "num_base_bdevs": 4, 00:13:25.340 "num_base_bdevs_discovered": 3, 00:13:25.340 "num_base_bdevs_operational": 3, 00:13:25.340 "process": { 00:13:25.340 "type": "rebuild", 00:13:25.340 "target": "spare", 00:13:25.340 "progress": { 00:13:25.340 "blocks": 20480, 00:13:25.340 "percent": 32 00:13:25.340 } 00:13:25.340 }, 00:13:25.340 "base_bdevs_list": [ 00:13:25.340 { 00:13:25.340 "name": "spare", 00:13:25.340 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:25.340 "is_configured": true, 00:13:25.340 "data_offset": 2048, 00:13:25.340 "data_size": 63488 00:13:25.340 }, 00:13:25.340 { 00:13:25.340 "name": null, 00:13:25.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.340 "is_configured": false, 00:13:25.340 "data_offset": 0, 00:13:25.340 "data_size": 63488 00:13:25.340 }, 00:13:25.340 { 00:13:25.340 "name": "BaseBdev3", 00:13:25.340 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:25.340 "is_configured": true, 00:13:25.340 "data_offset": 2048, 00:13:25.340 "data_size": 63488 00:13:25.340 }, 00:13:25.340 { 00:13:25.340 "name": "BaseBdev4", 
00:13:25.340 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:25.340 "is_configured": true, 00:13:25.340 "data_offset": 2048, 00:13:25.340 "data_size": 63488 00:13:25.340 } 00:13:25.340 ] 00:13:25.340 }' 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.340 04:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.600 [2024-12-13 04:29:25.409498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:25.860 [2024-12-13 04:29:25.634111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:25.860 [2024-12-13 04:29:25.870226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:26.120 [2024-12-13 04:29:25.972734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:26.120 [2024-12-13 04:29:25.973117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:26.381 118.20 IOPS, 354.60 MiB/s [2024-12-13T04:29:26.396Z] 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.381 [2024-12-13 04:29:26.314482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.381 "name": "raid_bdev1", 00:13:26.381 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:26.381 "strip_size_kb": 0, 00:13:26.381 "state": "online", 00:13:26.381 "raid_level": "raid1", 00:13:26.381 "superblock": true, 00:13:26.381 "num_base_bdevs": 4, 00:13:26.381 "num_base_bdevs_discovered": 3, 00:13:26.381 "num_base_bdevs_operational": 3, 00:13:26.381 "process": { 00:13:26.381 "type": "rebuild", 00:13:26.381 "target": "spare", 00:13:26.381 "progress": { 00:13:26.381 "blocks": 36864, 00:13:26.381 "percent": 58 00:13:26.381 } 00:13:26.381 }, 00:13:26.381 "base_bdevs_list": [ 00:13:26.381 { 00:13:26.381 "name": "spare", 00:13:26.381 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:26.381 "is_configured": true, 00:13:26.381 "data_offset": 2048, 00:13:26.381 "data_size": 63488 00:13:26.381 
}, 00:13:26.381 { 00:13:26.381 "name": null, 00:13:26.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.381 "is_configured": false, 00:13:26.381 "data_offset": 0, 00:13:26.381 "data_size": 63488 00:13:26.381 }, 00:13:26.381 { 00:13:26.381 "name": "BaseBdev3", 00:13:26.381 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:26.381 "is_configured": true, 00:13:26.381 "data_offset": 2048, 00:13:26.381 "data_size": 63488 00:13:26.381 }, 00:13:26.381 { 00:13:26.381 "name": "BaseBdev4", 00:13:26.381 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:26.381 "is_configured": true, 00:13:26.381 "data_offset": 2048, 00:13:26.381 "data_size": 63488 00:13:26.381 } 00:13:26.381 ] 00:13:26.381 }' 00:13:26.381 [2024-12-13 04:29:26.315862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.381 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.641 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.641 04:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.471 105.00 IOPS, 315.00 MiB/s [2024-12-13T04:29:27.486Z] 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.471 04:29:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.471 [2024-12-13 04:29:27.447604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.471 "name": "raid_bdev1", 00:13:27.471 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:27.471 "strip_size_kb": 0, 00:13:27.471 "state": "online", 00:13:27.471 "raid_level": "raid1", 00:13:27.471 "superblock": true, 00:13:27.471 "num_base_bdevs": 4, 00:13:27.471 "num_base_bdevs_discovered": 3, 00:13:27.471 "num_base_bdevs_operational": 3, 00:13:27.471 "process": { 00:13:27.471 "type": "rebuild", 00:13:27.471 "target": "spare", 00:13:27.471 "progress": { 00:13:27.471 "blocks": 55296, 00:13:27.471 "percent": 87 00:13:27.471 } 00:13:27.471 }, 00:13:27.471 "base_bdevs_list": [ 00:13:27.471 { 00:13:27.471 "name": "spare", 00:13:27.471 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:27.471 "is_configured": true, 00:13:27.471 "data_offset": 2048, 00:13:27.471 "data_size": 63488 00:13:27.471 }, 00:13:27.471 { 00:13:27.471 "name": null, 00:13:27.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.471 "is_configured": false, 00:13:27.471 
"data_offset": 0, 00:13:27.471 "data_size": 63488 00:13:27.471 }, 00:13:27.471 { 00:13:27.471 "name": "BaseBdev3", 00:13:27.471 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:27.471 "is_configured": true, 00:13:27.471 "data_offset": 2048, 00:13:27.471 "data_size": 63488 00:13:27.471 }, 00:13:27.471 { 00:13:27.471 "name": "BaseBdev4", 00:13:27.471 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:27.471 "is_configured": true, 00:13:27.471 "data_offset": 2048, 00:13:27.471 "data_size": 63488 00:13:27.471 } 00:13:27.471 ] 00:13:27.471 }' 00:13:27.471 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.731 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.731 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.731 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.731 04:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.731 [2024-12-13 04:29:27.661793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:27.990 [2024-12-13 04:29:27.904476] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:28.250 [2024-12-13 04:29:28.009286] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:28.250 [2024-12-13 04:29:28.012759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.821 95.86 IOPS, 287.57 MiB/s [2024-12-13T04:29:28.836Z] 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.821 04:29:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.821 "name": "raid_bdev1", 00:13:28.821 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:28.821 "strip_size_kb": 0, 00:13:28.821 "state": "online", 00:13:28.821 "raid_level": "raid1", 00:13:28.821 "superblock": true, 00:13:28.821 "num_base_bdevs": 4, 00:13:28.821 "num_base_bdevs_discovered": 3, 00:13:28.821 "num_base_bdevs_operational": 3, 00:13:28.821 "base_bdevs_list": [ 00:13:28.821 { 00:13:28.821 "name": "spare", 00:13:28.821 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:28.821 "is_configured": true, 00:13:28.821 "data_offset": 2048, 00:13:28.821 "data_size": 63488 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "name": null, 00:13:28.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.821 "is_configured": false, 00:13:28.821 "data_offset": 0, 00:13:28.821 "data_size": 63488 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "name": "BaseBdev3", 00:13:28.821 "uuid": 
"e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:28.821 "is_configured": true, 00:13:28.821 "data_offset": 2048, 00:13:28.821 "data_size": 63488 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "name": "BaseBdev4", 00:13:28.821 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:28.821 "is_configured": true, 00:13:28.821 "data_offset": 2048, 00:13:28.821 "data_size": 63488 00:13:28.821 } 00:13:28.821 ] 00:13:28.821 }' 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.821 "name": "raid_bdev1", 00:13:28.821 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:28.821 "strip_size_kb": 0, 00:13:28.821 "state": "online", 00:13:28.821 "raid_level": "raid1", 00:13:28.821 "superblock": true, 00:13:28.821 "num_base_bdevs": 4, 00:13:28.821 "num_base_bdevs_discovered": 3, 00:13:28.821 "num_base_bdevs_operational": 3, 00:13:28.821 "base_bdevs_list": [ 00:13:28.821 { 00:13:28.821 "name": "spare", 00:13:28.821 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:28.821 "is_configured": true, 00:13:28.821 "data_offset": 2048, 00:13:28.821 "data_size": 63488 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "name": null, 00:13:28.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.821 "is_configured": false, 00:13:28.821 "data_offset": 0, 00:13:28.821 "data_size": 63488 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "name": "BaseBdev3", 00:13:28.821 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:28.821 "is_configured": true, 00:13:28.821 "data_offset": 2048, 00:13:28.821 "data_size": 63488 00:13:28.821 }, 00:13:28.821 { 00:13:28.821 "name": "BaseBdev4", 00:13:28.821 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:28.821 "is_configured": true, 00:13:28.821 "data_offset": 2048, 00:13:28.821 "data_size": 63488 00:13:28.821 } 00:13:28.821 ] 00:13:28.821 }' 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.821 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.081 
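The segment above shows the test's polling pattern: `verify_raid_bdev_process` re-queries `bdev_raid_get_bdevs` each second, extracts `.process.type // "none"` with jq, and the caller `break`s once the rebuild process disappears. A minimal POSIX-shell sketch of that pattern, with a `$states` string standing in for successive jq results (the real loop also guards on `SECONDS < timeout` and sleeps between polls):

```shell
# Hypothetical sketch of the polling loop around bdev_raid.sh@707-@711.
# Each word in $states stands in for one
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.process.type // "none"'
# result; the real script also checks (( SECONDS < timeout )).
states="rebuild rebuild rebuild none"
polls=0
for process_type in $states; do
  polls=$((polls + 1))
  # keep polling while a rebuild is in flight, stop on anything else
  [ "$process_type" = "rebuild" ] || break
  # real test: sleep 1 here before polling again
done
echo "polled $polls times, final state: $process_type"
```

The `// "none"` alternative operator is what lets the same filter work after the rebuild finishes, when the `process` object is absent from the RPC output.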
04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.081 "name": "raid_bdev1", 00:13:29.081 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:29.081 "strip_size_kb": 0, 00:13:29.081 "state": "online", 00:13:29.081 "raid_level": "raid1", 00:13:29.081 
"superblock": true, 00:13:29.081 "num_base_bdevs": 4, 00:13:29.081 "num_base_bdevs_discovered": 3, 00:13:29.081 "num_base_bdevs_operational": 3, 00:13:29.081 "base_bdevs_list": [ 00:13:29.081 { 00:13:29.081 "name": "spare", 00:13:29.081 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:29.081 "is_configured": true, 00:13:29.081 "data_offset": 2048, 00:13:29.081 "data_size": 63488 00:13:29.081 }, 00:13:29.081 { 00:13:29.081 "name": null, 00:13:29.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.081 "is_configured": false, 00:13:29.081 "data_offset": 0, 00:13:29.081 "data_size": 63488 00:13:29.081 }, 00:13:29.081 { 00:13:29.081 "name": "BaseBdev3", 00:13:29.081 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:29.081 "is_configured": true, 00:13:29.081 "data_offset": 2048, 00:13:29.081 "data_size": 63488 00:13:29.081 }, 00:13:29.081 { 00:13:29.081 "name": "BaseBdev4", 00:13:29.081 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:29.081 "is_configured": true, 00:13:29.081 "data_offset": 2048, 00:13:29.081 "data_size": 63488 00:13:29.081 } 00:13:29.081 ] 00:13:29.081 }' 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.081 04:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 88.88 IOPS, 266.62 MiB/s [2024-12-13T04:29:29.359Z] 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.344 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.344 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 [2024-12-13 04:29:29.329011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.344 [2024-12-13 04:29:29.329113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.611 00:13:29.611 Latency(us) 00:13:29.611 
[2024-12-13T04:29:29.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.611 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:29.611 raid_bdev1 : 8.38 86.02 258.06 0.00 0.00 16670.91 273.66 119968.08 00:13:29.611 [2024-12-13T04:29:29.626Z] =================================================================================================================== 00:13:29.611 [2024-12-13T04:29:29.626Z] Total : 86.02 258.06 0.00 0.00 16670.91 273.66 119968.08 00:13:29.611 [2024-12-13 04:29:29.432350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.611 { 00:13:29.611 "results": [ 00:13:29.611 { 00:13:29.611 "job": "raid_bdev1", 00:13:29.611 "core_mask": "0x1", 00:13:29.611 "workload": "randrw", 00:13:29.611 "percentage": 50, 00:13:29.611 "status": "finished", 00:13:29.611 "queue_depth": 2, 00:13:29.611 "io_size": 3145728, 00:13:29.611 "runtime": 8.381622, 00:13:29.611 "iops": 86.02153616567294, 00:13:29.611 "mibps": 258.0646084970188, 00:13:29.611 "io_failed": 0, 00:13:29.611 "io_timeout": 0, 00:13:29.611 "avg_latency_us": 16670.910276241757, 00:13:29.611 "min_latency_us": 273.6628820960699, 00:13:29.611 "max_latency_us": 119968.08384279476 00:13:29.611 } 00:13:29.611 ], 00:13:29.611 "core_count": 1 00:13:29.611 } 00:13:29.611 [2024-12-13 04:29:29.432489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.611 [2024-12-13 04:29:29.432620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.611 [2024-12-13 04:29:29.432631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.611 
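The latency summary above reports 86.02 IOPS and 258.06 MiB/s with `io_size: 3145728` (3 MiB per I/O). Those two columns are related by MiB/s = IOPS × io_size / 2^20, which can be checked directly against the JSON results block (a sanity sketch, using the exact `iops` value from the log):

```shell
# Verify the MiB/s column of the summary table from the raw iops value:
# mibps = iops * io_size / 2^20, with io_size = 3145728 from the results JSON.
mibps=$(awk 'BEGIN {
  iops = 86.02153616567294   # "iops" field from the results block above
  io_size = 3145728          # 3 MiB per I/O
  printf "%.4f", iops * io_size / (1024 * 1024)
}')
echo "$mibps"
```

This matches the `mibps` field (258.0646…) reported in the JSON, confirming the table is derived rather than independently measured.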
04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.611 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:29.890 /dev/nbd0 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.890 
04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.890 1+0 records in 00:13:29.890 1+0 records out 00:13:29.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586919 s, 7.0 MB/s 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
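The `waitfornbd` segment above checks that the nbd name shows up in `/proc/partitions`, then proves the device is readable with a single 4 KiB `dd` and inspects the `1+0 records` transcript. A hedged stand-alone sketch of that readiness check, with a plain file standing in for `/dev/nbd0` so it runs without an actual nbd device:

```shell
# Hypothetical sketch of the waitfornbd-style readiness probe in the log:
# read one 4 KiB block and require dd's "1+0 records out" confirmation.
# $dev is a regular file here; the real helper targets /dev/nbd0 and first
# greps for the device name in /proc/partitions.
dev=/tmp/fake_nbd0
dd if=/dev/zero of="$dev" bs=4096 count=1 2>/dev/null   # seed one block
out=$(dd if="$dev" of=/dev/null bs=4096 count=1 2>&1)   # probe read
ready=$(echo "$out" | grep -q '1+0 records out' && echo ready)
echo "$ready"
rm -f "$dev"
```

Checking dd's stderr transcript rather than just its exit code is what lets the helper distinguish a full-block read from a short one.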
00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.890 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:30.167 /dev/nbd1 00:13:30.167 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.167 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.167 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:30.167 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:30.167 04:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.167 1+0 records in 00:13:30.167 1+0 records out 00:13:30.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474642 s, 8.6 MB/s 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.167 04:29:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.167 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:30.428 
04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.428 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:30.688 /dev/nbd1 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.688 1+0 records in 00:13:30.688 1+0 records out 00:13:30.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042917 s, 9.5 MB/s 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.688 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.948 04:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.948 04:29:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.208 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.208 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.208 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.208 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.209 [2024-12-13 04:29:31.086339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.209 
[2024-12-13 04:29:31.086454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.209 [2024-12-13 04:29:31.086484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:31.209 [2024-12-13 04:29:31.086494] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.209 [2024-12-13 04:29:31.088950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.209 [2024-12-13 04:29:31.088990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.209 [2024-12-13 04:29:31.089073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:31.209 [2024-12-13 04:29:31.089122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.209 [2024-12-13 04:29:31.089246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.209 [2024-12-13 04:29:31.089334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:31.209 spare 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.209 [2024-12-13 04:29:31.189227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:31.209 [2024-12-13 04:29:31.189293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.209 [2024-12-13 04:29:31.189638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:13:31.209 [2024-12-13 04:29:31.189821] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:31.209 [2024-12-13 04:29:31.189865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:31.209 [2024-12-13 04:29:31.190011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.209 04:29:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.209 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.468 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.468 "name": "raid_bdev1", 00:13:31.468 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:31.468 "strip_size_kb": 0, 00:13:31.468 "state": "online", 00:13:31.468 "raid_level": "raid1", 00:13:31.468 "superblock": true, 00:13:31.468 "num_base_bdevs": 4, 00:13:31.468 "num_base_bdevs_discovered": 3, 00:13:31.468 "num_base_bdevs_operational": 3, 00:13:31.468 "base_bdevs_list": [ 00:13:31.468 { 00:13:31.468 "name": "spare", 00:13:31.469 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:31.469 "is_configured": true, 00:13:31.469 "data_offset": 2048, 00:13:31.469 "data_size": 63488 00:13:31.469 }, 00:13:31.469 { 00:13:31.469 "name": null, 00:13:31.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.469 "is_configured": false, 00:13:31.469 "data_offset": 2048, 00:13:31.469 "data_size": 63488 00:13:31.469 }, 00:13:31.469 { 00:13:31.469 "name": "BaseBdev3", 00:13:31.469 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:31.469 "is_configured": true, 00:13:31.469 "data_offset": 2048, 00:13:31.469 "data_size": 63488 00:13:31.469 }, 00:13:31.469 { 00:13:31.469 "name": "BaseBdev4", 00:13:31.469 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:31.469 "is_configured": true, 00:13:31.469 "data_offset": 2048, 00:13:31.469 "data_size": 63488 00:13:31.469 } 00:13:31.469 ] 00:13:31.469 }' 00:13:31.469 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.469 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.729 "name": "raid_bdev1", 00:13:31.729 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:31.729 "strip_size_kb": 0, 00:13:31.729 "state": "online", 00:13:31.729 "raid_level": "raid1", 00:13:31.729 "superblock": true, 00:13:31.729 "num_base_bdevs": 4, 00:13:31.729 "num_base_bdevs_discovered": 3, 00:13:31.729 "num_base_bdevs_operational": 3, 00:13:31.729 "base_bdevs_list": [ 00:13:31.729 { 00:13:31.729 "name": "spare", 00:13:31.729 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:31.729 "is_configured": true, 00:13:31.729 "data_offset": 2048, 00:13:31.729 "data_size": 63488 00:13:31.729 }, 00:13:31.729 { 00:13:31.729 "name": null, 00:13:31.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.729 "is_configured": false, 00:13:31.729 "data_offset": 2048, 00:13:31.729 "data_size": 63488 00:13:31.729 }, 00:13:31.729 { 00:13:31.729 "name": "BaseBdev3", 00:13:31.729 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 
00:13:31.729 "is_configured": true, 00:13:31.729 "data_offset": 2048, 00:13:31.729 "data_size": 63488 00:13:31.729 }, 00:13:31.729 { 00:13:31.729 "name": "BaseBdev4", 00:13:31.729 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:31.729 "is_configured": true, 00:13:31.729 "data_offset": 2048, 00:13:31.729 "data_size": 63488 00:13:31.729 } 00:13:31.729 ] 00:13:31.729 }' 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.729 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.989 [2024-12-13 04:29:31.789195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.989 "name": "raid_bdev1", 00:13:31.989 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:31.989 "strip_size_kb": 0, 00:13:31.989 "state": 
"online", 00:13:31.989 "raid_level": "raid1", 00:13:31.989 "superblock": true, 00:13:31.989 "num_base_bdevs": 4, 00:13:31.989 "num_base_bdevs_discovered": 2, 00:13:31.989 "num_base_bdevs_operational": 2, 00:13:31.989 "base_bdevs_list": [ 00:13:31.989 { 00:13:31.989 "name": null, 00:13:31.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.989 "is_configured": false, 00:13:31.989 "data_offset": 0, 00:13:31.989 "data_size": 63488 00:13:31.989 }, 00:13:31.989 { 00:13:31.989 "name": null, 00:13:31.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.989 "is_configured": false, 00:13:31.989 "data_offset": 2048, 00:13:31.989 "data_size": 63488 00:13:31.989 }, 00:13:31.989 { 00:13:31.989 "name": "BaseBdev3", 00:13:31.989 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:31.989 "is_configured": true, 00:13:31.989 "data_offset": 2048, 00:13:31.989 "data_size": 63488 00:13:31.989 }, 00:13:31.989 { 00:13:31.989 "name": "BaseBdev4", 00:13:31.989 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:31.989 "is_configured": true, 00:13:31.989 "data_offset": 2048, 00:13:31.989 "data_size": 63488 00:13:31.989 } 00:13:31.989 ] 00:13:31.989 }' 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.989 04:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.559 04:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.559 04:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.559 04:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.559 [2024-12-13 04:29:32.296519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.559 [2024-12-13 04:29:32.296714] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:13:32.559 [2024-12-13 04:29:32.296794] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:32.559 [2024-12-13 04:29:32.296857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.559 [2024-12-13 04:29:32.304816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:13:32.559 04:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.559 04:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:32.559 [2024-12-13 04:29:32.307064] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.499 
"name": "raid_bdev1", 00:13:33.499 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:33.499 "strip_size_kb": 0, 00:13:33.499 "state": "online", 00:13:33.499 "raid_level": "raid1", 00:13:33.499 "superblock": true, 00:13:33.499 "num_base_bdevs": 4, 00:13:33.499 "num_base_bdevs_discovered": 3, 00:13:33.499 "num_base_bdevs_operational": 3, 00:13:33.499 "process": { 00:13:33.499 "type": "rebuild", 00:13:33.499 "target": "spare", 00:13:33.499 "progress": { 00:13:33.499 "blocks": 20480, 00:13:33.499 "percent": 32 00:13:33.499 } 00:13:33.499 }, 00:13:33.499 "base_bdevs_list": [ 00:13:33.499 { 00:13:33.499 "name": "spare", 00:13:33.499 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:33.499 "is_configured": true, 00:13:33.499 "data_offset": 2048, 00:13:33.499 "data_size": 63488 00:13:33.499 }, 00:13:33.499 { 00:13:33.499 "name": null, 00:13:33.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.499 "is_configured": false, 00:13:33.499 "data_offset": 2048, 00:13:33.499 "data_size": 63488 00:13:33.499 }, 00:13:33.499 { 00:13:33.499 "name": "BaseBdev3", 00:13:33.499 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:33.499 "is_configured": true, 00:13:33.499 "data_offset": 2048, 00:13:33.499 "data_size": 63488 00:13:33.499 }, 00:13:33.499 { 00:13:33.499 "name": "BaseBdev4", 00:13:33.499 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:33.499 "is_configured": true, 00:13:33.499 "data_offset": 2048, 00:13:33.499 "data_size": 63488 00:13:33.499 } 00:13:33.499 ] 00:13:33.499 }' 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.499 
04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.499 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.499 [2024-12-13 04:29:33.470955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.759 [2024-12-13 04:29:33.514544] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.759 [2024-12-13 04:29:33.514666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.759 [2024-12-13 04:29:33.514710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.759 [2024-12-13 04:29:33.514721] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.759 04:29:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.759 "name": "raid_bdev1", 00:13:33.759 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:33.759 "strip_size_kb": 0, 00:13:33.759 "state": "online", 00:13:33.759 "raid_level": "raid1", 00:13:33.759 "superblock": true, 00:13:33.759 "num_base_bdevs": 4, 00:13:33.759 "num_base_bdevs_discovered": 2, 00:13:33.759 "num_base_bdevs_operational": 2, 00:13:33.759 "base_bdevs_list": [ 00:13:33.759 { 00:13:33.759 "name": null, 00:13:33.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.759 "is_configured": false, 00:13:33.759 "data_offset": 0, 00:13:33.759 "data_size": 63488 00:13:33.759 }, 00:13:33.759 { 00:13:33.759 "name": null, 00:13:33.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.759 "is_configured": false, 00:13:33.759 "data_offset": 2048, 00:13:33.759 "data_size": 63488 00:13:33.759 }, 00:13:33.759 { 00:13:33.759 "name": "BaseBdev3", 00:13:33.759 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:33.759 "is_configured": true, 00:13:33.759 "data_offset": 2048, 00:13:33.759 "data_size": 63488 00:13:33.759 }, 00:13:33.759 { 00:13:33.759 "name": "BaseBdev4", 00:13:33.759 "uuid": 
"ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:33.759 "is_configured": true, 00:13:33.759 "data_offset": 2048, 00:13:33.759 "data_size": 63488 00:13:33.759 } 00:13:33.759 ] 00:13:33.759 }' 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.759 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.020 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.020 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.020 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.020 [2024-12-13 04:29:33.977155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.020 [2024-12-13 04:29:33.977275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.020 [2024-12-13 04:29:33.977322] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:34.020 [2024-12-13 04:29:33.977350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.020 [2024-12-13 04:29:33.977866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.020 [2024-12-13 04:29:33.977926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.020 [2024-12-13 04:29:33.978045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:34.020 [2024-12-13 04:29:33.978084] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:34.020 [2024-12-13 04:29:33.978142] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:34.020 [2024-12-13 04:29:33.978224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.020 [2024-12-13 04:29:33.983611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:13:34.020 spare 00:13:34.020 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.020 04:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:34.020 [2024-12-13 04:29:33.985833] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.401 04:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.401 "name": "raid_bdev1", 00:13:35.401 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:35.401 "strip_size_kb": 0, 00:13:35.401 
"state": "online", 00:13:35.401 "raid_level": "raid1", 00:13:35.401 "superblock": true, 00:13:35.401 "num_base_bdevs": 4, 00:13:35.401 "num_base_bdevs_discovered": 3, 00:13:35.401 "num_base_bdevs_operational": 3, 00:13:35.401 "process": { 00:13:35.401 "type": "rebuild", 00:13:35.401 "target": "spare", 00:13:35.401 "progress": { 00:13:35.401 "blocks": 20480, 00:13:35.401 "percent": 32 00:13:35.401 } 00:13:35.401 }, 00:13:35.401 "base_bdevs_list": [ 00:13:35.401 { 00:13:35.401 "name": "spare", 00:13:35.401 "uuid": "a5cc6756-83e1-57fe-a30f-818b2729fcaf", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": null, 00:13:35.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.401 "is_configured": false, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": "BaseBdev3", 00:13:35.401 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": "BaseBdev4", 00:13:35.401 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 } 00:13:35.401 ] 00:13:35.401 }' 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.401 04:29:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.401 [2024-12-13 04:29:35.138688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.401 [2024-12-13 04:29:35.193543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.401 [2024-12-13 04:29:35.193656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.401 [2024-12-13 04:29:35.193691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.401 [2024-12-13 04:29:35.193715] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.401 04:29:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.401 "name": "raid_bdev1", 00:13:35.401 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:35.401 "strip_size_kb": 0, 00:13:35.401 "state": "online", 00:13:35.401 "raid_level": "raid1", 00:13:35.401 "superblock": true, 00:13:35.401 "num_base_bdevs": 4, 00:13:35.401 "num_base_bdevs_discovered": 2, 00:13:35.401 "num_base_bdevs_operational": 2, 00:13:35.401 "base_bdevs_list": [ 00:13:35.401 { 00:13:35.401 "name": null, 00:13:35.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.401 "is_configured": false, 00:13:35.401 "data_offset": 0, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": null, 00:13:35.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.401 "is_configured": false, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": "BaseBdev3", 00:13:35.401 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": "BaseBdev4", 00:13:35.401 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 
"data_size": 63488 00:13:35.401 } 00:13:35.401 ] 00:13:35.401 }' 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.401 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.661 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.661 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.661 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.661 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.661 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.920 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.920 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.920 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.920 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.920 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.920 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.920 "name": "raid_bdev1", 00:13:35.920 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:35.920 "strip_size_kb": 0, 00:13:35.920 "state": "online", 00:13:35.920 "raid_level": "raid1", 00:13:35.920 "superblock": true, 00:13:35.920 "num_base_bdevs": 4, 00:13:35.920 "num_base_bdevs_discovered": 2, 00:13:35.920 "num_base_bdevs_operational": 2, 00:13:35.920 "base_bdevs_list": [ 00:13:35.920 { 00:13:35.920 "name": null, 00:13:35.920 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:35.920 "is_configured": false, 00:13:35.920 "data_offset": 0, 00:13:35.920 "data_size": 63488 00:13:35.920 }, 00:13:35.920 { 00:13:35.920 "name": null, 00:13:35.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.921 "is_configured": false, 00:13:35.921 "data_offset": 2048, 00:13:35.921 "data_size": 63488 00:13:35.921 }, 00:13:35.921 { 00:13:35.921 "name": "BaseBdev3", 00:13:35.921 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:35.921 "is_configured": true, 00:13:35.921 "data_offset": 2048, 00:13:35.921 "data_size": 63488 00:13:35.921 }, 00:13:35.921 { 00:13:35.921 "name": "BaseBdev4", 00:13:35.921 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:35.921 "is_configured": true, 00:13:35.921 "data_offset": 2048, 00:13:35.921 "data_size": 63488 00:13:35.921 } 00:13:35.921 ] 00:13:35.921 }' 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.921 04:29:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.921 [2024-12-13 04:29:35.839378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.921 [2024-12-13 04:29:35.839508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.921 [2024-12-13 04:29:35.839551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:35.921 [2024-12-13 04:29:35.839601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.921 [2024-12-13 04:29:35.840060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.921 [2024-12-13 04:29:35.840120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.921 [2024-12-13 04:29:35.840203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:35.921 [2024-12-13 04:29:35.840222] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:35.921 [2024-12-13 04:29:35.840230] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:35.921 [2024-12-13 04:29:35.840242] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:35.921 BaseBdev1 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.921 04:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.861 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.121 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.121 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.121 "name": "raid_bdev1", 00:13:37.121 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:37.121 "strip_size_kb": 0, 00:13:37.121 "state": "online", 00:13:37.121 "raid_level": "raid1", 00:13:37.121 "superblock": true, 00:13:37.121 "num_base_bdevs": 4, 00:13:37.121 "num_base_bdevs_discovered": 2, 00:13:37.121 "num_base_bdevs_operational": 2, 00:13:37.121 "base_bdevs_list": [ 00:13:37.121 { 00:13:37.121 "name": null, 00:13:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.121 "is_configured": false, 00:13:37.121 
"data_offset": 0, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": null, 00:13:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.121 "is_configured": false, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": "BaseBdev3", 00:13:37.121 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": "BaseBdev4", 00:13:37.121 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 } 00:13:37.121 ] 00:13:37.121 }' 00:13:37.121 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.121 04:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.381 "name": "raid_bdev1", 00:13:37.381 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:37.381 "strip_size_kb": 0, 00:13:37.381 "state": "online", 00:13:37.381 "raid_level": "raid1", 00:13:37.381 "superblock": true, 00:13:37.381 "num_base_bdevs": 4, 00:13:37.381 "num_base_bdevs_discovered": 2, 00:13:37.381 "num_base_bdevs_operational": 2, 00:13:37.381 "base_bdevs_list": [ 00:13:37.381 { 00:13:37.381 "name": null, 00:13:37.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.381 "is_configured": false, 00:13:37.381 "data_offset": 0, 00:13:37.381 "data_size": 63488 00:13:37.381 }, 00:13:37.381 { 00:13:37.381 "name": null, 00:13:37.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.381 "is_configured": false, 00:13:37.381 "data_offset": 2048, 00:13:37.381 "data_size": 63488 00:13:37.381 }, 00:13:37.381 { 00:13:37.381 "name": "BaseBdev3", 00:13:37.381 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:37.381 "is_configured": true, 00:13:37.381 "data_offset": 2048, 00:13:37.381 "data_size": 63488 00:13:37.381 }, 00:13:37.381 { 00:13:37.381 "name": "BaseBdev4", 00:13:37.381 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:37.381 "is_configured": true, 00:13:37.381 "data_offset": 2048, 00:13:37.381 "data_size": 63488 00:13:37.381 } 00:13:37.381 ] 00:13:37.381 }' 00:13:37.381 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.641 [2024-12-13 04:29:37.464736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.641 [2024-12-13 04:29:37.464943] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:37.641 [2024-12-13 04:29:37.465008] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:37.641 request: 00:13:37.641 { 00:13:37.641 "base_bdev": "BaseBdev1", 00:13:37.641 "raid_bdev": "raid_bdev1", 00:13:37.641 "method": "bdev_raid_add_base_bdev", 00:13:37.641 "req_id": 1 00:13:37.641 } 00:13:37.641 Got JSON-RPC error response 00:13:37.641 response: 00:13:37.641 { 00:13:37.641 "code": -22, 
00:13:37.641 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:37.641 } 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.641 04:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.581 04:29:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.581 "name": "raid_bdev1", 00:13:38.581 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:38.581 "strip_size_kb": 0, 00:13:38.581 "state": "online", 00:13:38.581 "raid_level": "raid1", 00:13:38.581 "superblock": true, 00:13:38.581 "num_base_bdevs": 4, 00:13:38.581 "num_base_bdevs_discovered": 2, 00:13:38.581 "num_base_bdevs_operational": 2, 00:13:38.581 "base_bdevs_list": [ 00:13:38.581 { 00:13:38.581 "name": null, 00:13:38.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.581 "is_configured": false, 00:13:38.581 "data_offset": 0, 00:13:38.581 "data_size": 63488 00:13:38.581 }, 00:13:38.581 { 00:13:38.581 "name": null, 00:13:38.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.581 "is_configured": false, 00:13:38.581 "data_offset": 2048, 00:13:38.581 "data_size": 63488 00:13:38.581 }, 00:13:38.581 { 00:13:38.581 "name": "BaseBdev3", 00:13:38.581 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:38.581 "is_configured": true, 00:13:38.581 "data_offset": 2048, 00:13:38.581 "data_size": 63488 00:13:38.581 }, 00:13:38.581 { 00:13:38.581 "name": "BaseBdev4", 00:13:38.581 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:38.581 "is_configured": true, 00:13:38.581 "data_offset": 2048, 00:13:38.581 "data_size": 63488 00:13:38.581 } 00:13:38.581 ] 00:13:38.581 }' 00:13:38.581 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.581 04:29:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.152 04:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.152 "name": "raid_bdev1", 00:13:39.152 "uuid": "58a24e67-74bf-4378-8c82-82059e299ef0", 00:13:39.152 "strip_size_kb": 0, 00:13:39.152 "state": "online", 00:13:39.152 "raid_level": "raid1", 00:13:39.152 "superblock": true, 00:13:39.152 "num_base_bdevs": 4, 00:13:39.152 "num_base_bdevs_discovered": 2, 00:13:39.152 "num_base_bdevs_operational": 2, 00:13:39.152 "base_bdevs_list": [ 00:13:39.152 { 00:13:39.152 "name": null, 00:13:39.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.152 "is_configured": false, 00:13:39.152 "data_offset": 0, 00:13:39.152 "data_size": 63488 00:13:39.152 }, 00:13:39.152 { 00:13:39.152 "name": null, 00:13:39.152 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:39.152 "is_configured": false, 00:13:39.152 "data_offset": 2048, 00:13:39.152 "data_size": 63488 00:13:39.152 }, 00:13:39.152 { 00:13:39.152 "name": "BaseBdev3", 00:13:39.152 "uuid": "e72919fe-7b68-56b6-8b54-5172ad249e01", 00:13:39.152 "is_configured": true, 00:13:39.152 "data_offset": 2048, 00:13:39.152 "data_size": 63488 00:13:39.152 }, 00:13:39.152 { 00:13:39.152 "name": "BaseBdev4", 00:13:39.152 "uuid": "ac17915d-a0fb-5635-b615-a1ca670201cc", 00:13:39.152 "is_configured": true, 00:13:39.152 "data_offset": 2048, 00:13:39.152 "data_size": 63488 00:13:39.152 } 00:13:39.152 ] 00:13:39.152 }' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91472 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 91472 ']' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 91472 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91472 00:13:39.152 killing process with pid 91472 00:13:39.152 Received shutdown signal, test time was about 18.094267 seconds 00:13:39.152 00:13:39.152 Latency(us) 00:13:39.152 [2024-12-13T04:29:39.167Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:13:39.152 [2024-12-13T04:29:39.167Z] =================================================================================================================== 00:13:39.152 [2024-12-13T04:29:39.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91472' 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 91472 00:13:39.152 [2024-12-13 04:29:39.122689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.152 [2024-12-13 04:29:39.122802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.152 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 91472 00:13:39.152 [2024-12-13 04:29:39.122874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.152 [2024-12-13 04:29:39.122885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:39.412 [2024-12-13 04:29:39.208961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.671 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:39.671 00:13:39.671 real 0m20.230s 00:13:39.671 user 0m26.746s 00:13:39.671 sys 0m2.783s 00:13:39.671 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.671 04:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.671 ************************************ 00:13:39.671 END TEST raid_rebuild_test_sb_io 00:13:39.671 
************************************ 00:13:39.671 04:29:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:39.671 04:29:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:39.671 04:29:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:39.671 04:29:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.671 04:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.671 ************************************ 00:13:39.671 START TEST raid5f_state_function_test 00:13:39.671 ************************************ 00:13:39.671 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:39.671 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:39.671 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:39.671 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.672 04:29:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92183 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.672 04:29:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92183' 00:13:39.672 Process raid pid: 92183 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92183 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 92183 ']' 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.672 04:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.932 [2024-12-13 04:29:39.714554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:13:39.932 [2024-12-13 04:29:39.714690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.932 [2024-12-13 04:29:39.868229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.932 [2024-12-13 04:29:39.907201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.192 [2024-12-13 04:29:39.984600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.192 [2024-12-13 04:29:39.984635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.761 [2024-12-13 04:29:40.547918] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.761 [2024-12-13 04:29:40.548046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.761 [2024-12-13 04:29:40.548079] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.761 [2024-12-13 04:29:40.548102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.761 [2024-12-13 04:29:40.548120] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:40.761 [2024-12-13 04:29:40.548160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.761 "name": "Existed_Raid", 00:13:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.761 "strip_size_kb": 64, 00:13:40.761 "state": "configuring", 00:13:40.761 "raid_level": "raid5f", 00:13:40.761 "superblock": false, 00:13:40.761 "num_base_bdevs": 3, 00:13:40.761 "num_base_bdevs_discovered": 0, 00:13:40.761 "num_base_bdevs_operational": 3, 00:13:40.761 "base_bdevs_list": [ 00:13:40.761 { 00:13:40.761 "name": "BaseBdev1", 00:13:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.761 "is_configured": false, 00:13:40.761 "data_offset": 0, 00:13:40.761 "data_size": 0 00:13:40.761 }, 00:13:40.761 { 00:13:40.761 "name": "BaseBdev2", 00:13:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.761 "is_configured": false, 00:13:40.761 "data_offset": 0, 00:13:40.761 "data_size": 0 00:13:40.761 }, 00:13:40.761 { 00:13:40.761 "name": "BaseBdev3", 00:13:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.761 "is_configured": false, 00:13:40.761 "data_offset": 0, 00:13:40.761 "data_size": 0 00:13:40.761 } 00:13:40.761 ] 00:13:40.761 }' 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.761 04:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.021 [2024-12-13 04:29:41.007055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.021 [2024-12-13 04:29:41.007156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.021 [2024-12-13 04:29:41.019053] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.021 [2024-12-13 04:29:41.019131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.021 [2024-12-13 04:29:41.019171] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.021 [2024-12-13 04:29:41.019193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.021 [2024-12-13 04:29:41.019210] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.021 [2024-12-13 04:29:41.019230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.021 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.282 [2024-12-13 04:29:41.046157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.282 BaseBdev1 00:13:41.282 04:29:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.282 [ 00:13:41.282 { 00:13:41.282 "name": "BaseBdev1", 00:13:41.282 "aliases": [ 00:13:41.282 "c2fea03e-de8a-4c4b-8f2f-78681503a0fd" 00:13:41.282 ], 00:13:41.282 "product_name": "Malloc disk", 00:13:41.282 "block_size": 512, 00:13:41.282 "num_blocks": 65536, 00:13:41.282 "uuid": "c2fea03e-de8a-4c4b-8f2f-78681503a0fd", 00:13:41.282 "assigned_rate_limits": { 00:13:41.282 "rw_ios_per_sec": 0, 00:13:41.282 
"rw_mbytes_per_sec": 0, 00:13:41.282 "r_mbytes_per_sec": 0, 00:13:41.282 "w_mbytes_per_sec": 0 00:13:41.282 }, 00:13:41.282 "claimed": true, 00:13:41.282 "claim_type": "exclusive_write", 00:13:41.282 "zoned": false, 00:13:41.282 "supported_io_types": { 00:13:41.282 "read": true, 00:13:41.282 "write": true, 00:13:41.282 "unmap": true, 00:13:41.282 "flush": true, 00:13:41.282 "reset": true, 00:13:41.282 "nvme_admin": false, 00:13:41.282 "nvme_io": false, 00:13:41.282 "nvme_io_md": false, 00:13:41.282 "write_zeroes": true, 00:13:41.282 "zcopy": true, 00:13:41.282 "get_zone_info": false, 00:13:41.282 "zone_management": false, 00:13:41.282 "zone_append": false, 00:13:41.282 "compare": false, 00:13:41.282 "compare_and_write": false, 00:13:41.282 "abort": true, 00:13:41.282 "seek_hole": false, 00:13:41.282 "seek_data": false, 00:13:41.282 "copy": true, 00:13:41.282 "nvme_iov_md": false 00:13:41.282 }, 00:13:41.282 "memory_domains": [ 00:13:41.282 { 00:13:41.282 "dma_device_id": "system", 00:13:41.282 "dma_device_type": 1 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.282 "dma_device_type": 2 00:13:41.282 } 00:13:41.282 ], 00:13:41.282 "driver_specific": {} 00:13:41.282 } 00:13:41.282 ] 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.282 04:29:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.282 "name": "Existed_Raid", 00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.282 "strip_size_kb": 64, 00:13:41.282 "state": "configuring", 00:13:41.282 "raid_level": "raid5f", 00:13:41.282 "superblock": false, 00:13:41.282 "num_base_bdevs": 3, 00:13:41.282 "num_base_bdevs_discovered": 1, 00:13:41.282 "num_base_bdevs_operational": 3, 00:13:41.282 "base_bdevs_list": [ 00:13:41.282 { 00:13:41.282 "name": "BaseBdev1", 00:13:41.282 "uuid": "c2fea03e-de8a-4c4b-8f2f-78681503a0fd", 00:13:41.282 "is_configured": true, 00:13:41.282 "data_offset": 0, 00:13:41.282 "data_size": 65536 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": 
"BaseBdev2", 00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.282 "is_configured": false, 00:13:41.282 "data_offset": 0, 00:13:41.282 "data_size": 0 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": "BaseBdev3", 00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.282 "is_configured": false, 00:13:41.282 "data_offset": 0, 00:13:41.282 "data_size": 0 00:13:41.282 } 00:13:41.282 ] 00:13:41.282 }' 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.282 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.542 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.542 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.542 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.542 [2024-12-13 04:29:41.557309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.542 [2024-12-13 04:29:41.557398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.802 [2024-12-13 04:29:41.569318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.802 [2024-12-13 04:29:41.571488] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:41.802 [2024-12-13 04:29:41.571557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.802 [2024-12-13 04:29:41.571598] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.802 [2024-12-13 04:29:41.571622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.802 "name": "Existed_Raid", 00:13:41.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.802 "strip_size_kb": 64, 00:13:41.802 "state": "configuring", 00:13:41.802 "raid_level": "raid5f", 00:13:41.802 "superblock": false, 00:13:41.802 "num_base_bdevs": 3, 00:13:41.802 "num_base_bdevs_discovered": 1, 00:13:41.802 "num_base_bdevs_operational": 3, 00:13:41.802 "base_bdevs_list": [ 00:13:41.802 { 00:13:41.802 "name": "BaseBdev1", 00:13:41.802 "uuid": "c2fea03e-de8a-4c4b-8f2f-78681503a0fd", 00:13:41.802 "is_configured": true, 00:13:41.802 "data_offset": 0, 00:13:41.802 "data_size": 65536 00:13:41.802 }, 00:13:41.802 { 00:13:41.802 "name": "BaseBdev2", 00:13:41.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.802 "is_configured": false, 00:13:41.802 "data_offset": 0, 00:13:41.802 "data_size": 0 00:13:41.802 }, 00:13:41.802 { 00:13:41.802 "name": "BaseBdev3", 00:13:41.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.802 "is_configured": false, 00:13:41.802 "data_offset": 0, 00:13:41.802 "data_size": 0 00:13:41.802 } 00:13:41.802 ] 00:13:41.802 }' 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.802 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 04:29:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.063 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 04:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 [2024-12-13 04:29:42.013246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.063 BaseBdev2 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.063 [ 00:13:42.063 { 00:13:42.063 "name": "BaseBdev2", 00:13:42.063 "aliases": [ 00:13:42.063 "3b0d8375-3070-445b-a270-865b00a1a718" 00:13:42.063 ], 00:13:42.063 "product_name": "Malloc disk", 00:13:42.063 "block_size": 512, 00:13:42.063 "num_blocks": 65536, 00:13:42.063 "uuid": "3b0d8375-3070-445b-a270-865b00a1a718", 00:13:42.063 "assigned_rate_limits": { 00:13:42.063 "rw_ios_per_sec": 0, 00:13:42.063 "rw_mbytes_per_sec": 0, 00:13:42.063 "r_mbytes_per_sec": 0, 00:13:42.063 "w_mbytes_per_sec": 0 00:13:42.063 }, 00:13:42.063 "claimed": true, 00:13:42.063 "claim_type": "exclusive_write", 00:13:42.063 "zoned": false, 00:13:42.063 "supported_io_types": { 00:13:42.063 "read": true, 00:13:42.063 "write": true, 00:13:42.063 "unmap": true, 00:13:42.063 "flush": true, 00:13:42.063 "reset": true, 00:13:42.063 "nvme_admin": false, 00:13:42.063 "nvme_io": false, 00:13:42.063 "nvme_io_md": false, 00:13:42.063 "write_zeroes": true, 00:13:42.063 "zcopy": true, 00:13:42.063 "get_zone_info": false, 00:13:42.063 "zone_management": false, 00:13:42.063 "zone_append": false, 00:13:42.063 "compare": false, 00:13:42.063 "compare_and_write": false, 00:13:42.063 "abort": true, 00:13:42.063 "seek_hole": false, 00:13:42.063 "seek_data": false, 00:13:42.063 "copy": true, 00:13:42.063 "nvme_iov_md": false 00:13:42.063 }, 00:13:42.063 "memory_domains": [ 00:13:42.063 { 00:13:42.063 "dma_device_id": "system", 00:13:42.063 "dma_device_type": 1 00:13:42.063 }, 00:13:42.063 { 00:13:42.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.063 "dma_device_type": 2 00:13:42.063 } 00:13:42.063 ], 00:13:42.063 "driver_specific": {} 00:13:42.063 } 00:13:42.063 ] 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.323 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.323 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:42.323 "name": "Existed_Raid", 00:13:42.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.323 "strip_size_kb": 64, 00:13:42.323 "state": "configuring", 00:13:42.323 "raid_level": "raid5f", 00:13:42.323 "superblock": false, 00:13:42.323 "num_base_bdevs": 3, 00:13:42.323 "num_base_bdevs_discovered": 2, 00:13:42.323 "num_base_bdevs_operational": 3, 00:13:42.323 "base_bdevs_list": [ 00:13:42.323 { 00:13:42.323 "name": "BaseBdev1", 00:13:42.323 "uuid": "c2fea03e-de8a-4c4b-8f2f-78681503a0fd", 00:13:42.323 "is_configured": true, 00:13:42.323 "data_offset": 0, 00:13:42.323 "data_size": 65536 00:13:42.323 }, 00:13:42.323 { 00:13:42.323 "name": "BaseBdev2", 00:13:42.323 "uuid": "3b0d8375-3070-445b-a270-865b00a1a718", 00:13:42.323 "is_configured": true, 00:13:42.323 "data_offset": 0, 00:13:42.323 "data_size": 65536 00:13:42.323 }, 00:13:42.323 { 00:13:42.323 "name": "BaseBdev3", 00:13:42.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.323 "is_configured": false, 00:13:42.323 "data_offset": 0, 00:13:42.323 "data_size": 0 00:13:42.323 } 00:13:42.323 ] 00:13:42.323 }' 00:13:42.323 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.323 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.583 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.583 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.583 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.584 [2024-12-13 04:29:42.552939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.584 [2024-12-13 04:29:42.553223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:42.584 [2024-12-13 04:29:42.553354] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:42.584 [2024-12-13 04:29:42.554501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:42.584 [2024-12-13 04:29:42.556147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:42.584 [2024-12-13 04:29:42.556222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:42.584 BaseBdev3 00:13:42.584 [2024-12-13 04:29:42.556951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.584 [ 00:13:42.584 { 00:13:42.584 "name": "BaseBdev3", 00:13:42.584 "aliases": [ 00:13:42.584 "9e656465-d18f-470a-9ad4-d6aa385983f6" 00:13:42.584 ], 00:13:42.584 "product_name": "Malloc disk", 00:13:42.584 "block_size": 512, 00:13:42.584 "num_blocks": 65536, 00:13:42.584 "uuid": "9e656465-d18f-470a-9ad4-d6aa385983f6", 00:13:42.584 "assigned_rate_limits": { 00:13:42.584 "rw_ios_per_sec": 0, 00:13:42.584 "rw_mbytes_per_sec": 0, 00:13:42.584 "r_mbytes_per_sec": 0, 00:13:42.584 "w_mbytes_per_sec": 0 00:13:42.584 }, 00:13:42.584 "claimed": true, 00:13:42.584 "claim_type": "exclusive_write", 00:13:42.584 "zoned": false, 00:13:42.584 "supported_io_types": { 00:13:42.584 "read": true, 00:13:42.584 "write": true, 00:13:42.584 "unmap": true, 00:13:42.584 "flush": true, 00:13:42.584 "reset": true, 00:13:42.584 "nvme_admin": false, 00:13:42.584 "nvme_io": false, 00:13:42.584 "nvme_io_md": false, 00:13:42.584 "write_zeroes": true, 00:13:42.584 "zcopy": true, 00:13:42.584 "get_zone_info": false, 00:13:42.584 "zone_management": false, 00:13:42.584 "zone_append": false, 00:13:42.584 "compare": false, 00:13:42.584 "compare_and_write": false, 00:13:42.584 "abort": true, 00:13:42.584 "seek_hole": false, 00:13:42.584 "seek_data": false, 00:13:42.584 "copy": true, 00:13:42.584 "nvme_iov_md": false 00:13:42.584 }, 00:13:42.584 "memory_domains": [ 00:13:42.584 { 00:13:42.584 "dma_device_id": "system", 00:13:42.584 "dma_device_type": 1 00:13:42.584 }, 00:13:42.584 { 00:13:42.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.584 "dma_device_type": 2 00:13:42.584 } 00:13:42.584 ], 00:13:42.584 "driver_specific": {} 00:13:42.584 } 00:13:42.584 ] 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.584 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 04:29:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.844 "name": "Existed_Raid", 00:13:42.844 "uuid": "1a540277-ac1d-4e9b-ae95-b89b157558d7", 00:13:42.844 "strip_size_kb": 64, 00:13:42.844 "state": "online", 00:13:42.844 "raid_level": "raid5f", 00:13:42.844 "superblock": false, 00:13:42.844 "num_base_bdevs": 3, 00:13:42.844 "num_base_bdevs_discovered": 3, 00:13:42.844 "num_base_bdevs_operational": 3, 00:13:42.844 "base_bdevs_list": [ 00:13:42.844 { 00:13:42.844 "name": "BaseBdev1", 00:13:42.844 "uuid": "c2fea03e-de8a-4c4b-8f2f-78681503a0fd", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 0, 00:13:42.844 "data_size": 65536 00:13:42.844 }, 00:13:42.844 { 00:13:42.844 "name": "BaseBdev2", 00:13:42.844 "uuid": "3b0d8375-3070-445b-a270-865b00a1a718", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 0, 00:13:42.844 "data_size": 65536 00:13:42.844 }, 00:13:42.844 { 00:13:42.844 "name": "BaseBdev3", 00:13:42.844 "uuid": "9e656465-d18f-470a-9ad4-d6aa385983f6", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 0, 00:13:42.844 "data_size": 65536 00:13:42.844 } 00:13:42.844 ] 00:13:42.844 }' 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.844 04:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.104 04:29:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.104 [2024-12-13 04:29:43.068573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.104 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.104 "name": "Existed_Raid", 00:13:43.104 "aliases": [ 00:13:43.104 "1a540277-ac1d-4e9b-ae95-b89b157558d7" 00:13:43.104 ], 00:13:43.104 "product_name": "Raid Volume", 00:13:43.104 "block_size": 512, 00:13:43.104 "num_blocks": 131072, 00:13:43.104 "uuid": "1a540277-ac1d-4e9b-ae95-b89b157558d7", 00:13:43.104 "assigned_rate_limits": { 00:13:43.104 "rw_ios_per_sec": 0, 00:13:43.104 "rw_mbytes_per_sec": 0, 00:13:43.104 "r_mbytes_per_sec": 0, 00:13:43.104 "w_mbytes_per_sec": 0 00:13:43.104 }, 00:13:43.104 "claimed": false, 00:13:43.104 "zoned": false, 00:13:43.104 "supported_io_types": { 00:13:43.104 "read": true, 00:13:43.104 "write": true, 00:13:43.104 "unmap": false, 00:13:43.104 "flush": false, 00:13:43.104 "reset": true, 00:13:43.104 "nvme_admin": false, 00:13:43.104 "nvme_io": false, 00:13:43.104 "nvme_io_md": false, 00:13:43.104 "write_zeroes": true, 00:13:43.104 "zcopy": false, 00:13:43.104 "get_zone_info": false, 00:13:43.104 "zone_management": false, 00:13:43.104 "zone_append": false, 
00:13:43.104 "compare": false, 00:13:43.104 "compare_and_write": false, 00:13:43.104 "abort": false, 00:13:43.104 "seek_hole": false, 00:13:43.104 "seek_data": false, 00:13:43.104 "copy": false, 00:13:43.104 "nvme_iov_md": false 00:13:43.104 }, 00:13:43.104 "driver_specific": { 00:13:43.104 "raid": { 00:13:43.104 "uuid": "1a540277-ac1d-4e9b-ae95-b89b157558d7", 00:13:43.104 "strip_size_kb": 64, 00:13:43.104 "state": "online", 00:13:43.104 "raid_level": "raid5f", 00:13:43.104 "superblock": false, 00:13:43.104 "num_base_bdevs": 3, 00:13:43.104 "num_base_bdevs_discovered": 3, 00:13:43.104 "num_base_bdevs_operational": 3, 00:13:43.104 "base_bdevs_list": [ 00:13:43.104 { 00:13:43.104 "name": "BaseBdev1", 00:13:43.104 "uuid": "c2fea03e-de8a-4c4b-8f2f-78681503a0fd", 00:13:43.104 "is_configured": true, 00:13:43.104 "data_offset": 0, 00:13:43.104 "data_size": 65536 00:13:43.104 }, 00:13:43.104 { 00:13:43.104 "name": "BaseBdev2", 00:13:43.104 "uuid": "3b0d8375-3070-445b-a270-865b00a1a718", 00:13:43.104 "is_configured": true, 00:13:43.104 "data_offset": 0, 00:13:43.104 "data_size": 65536 00:13:43.104 }, 00:13:43.104 { 00:13:43.104 "name": "BaseBdev3", 00:13:43.104 "uuid": "9e656465-d18f-470a-9ad4-d6aa385983f6", 00:13:43.104 "is_configured": true, 00:13:43.104 "data_offset": 0, 00:13:43.104 "data_size": 65536 00:13:43.104 } 00:13:43.105 ] 00:13:43.105 } 00:13:43.105 } 00:13:43.105 }' 00:13:43.105 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.365 BaseBdev2 00:13:43.365 BaseBdev3' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 [2024-12-13 04:29:43.323951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:43.365 
04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.626 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.626 "name": "Existed_Raid", 00:13:43.626 "uuid": "1a540277-ac1d-4e9b-ae95-b89b157558d7", 00:13:43.626 "strip_size_kb": 64, 00:13:43.626 "state": 
"online", 00:13:43.626 "raid_level": "raid5f", 00:13:43.626 "superblock": false, 00:13:43.626 "num_base_bdevs": 3, 00:13:43.626 "num_base_bdevs_discovered": 2, 00:13:43.626 "num_base_bdevs_operational": 2, 00:13:43.626 "base_bdevs_list": [ 00:13:43.626 { 00:13:43.626 "name": null, 00:13:43.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.626 "is_configured": false, 00:13:43.626 "data_offset": 0, 00:13:43.626 "data_size": 65536 00:13:43.626 }, 00:13:43.626 { 00:13:43.626 "name": "BaseBdev2", 00:13:43.626 "uuid": "3b0d8375-3070-445b-a270-865b00a1a718", 00:13:43.626 "is_configured": true, 00:13:43.626 "data_offset": 0, 00:13:43.626 "data_size": 65536 00:13:43.626 }, 00:13:43.626 { 00:13:43.626 "name": "BaseBdev3", 00:13:43.626 "uuid": "9e656465-d18f-470a-9ad4-d6aa385983f6", 00:13:43.626 "is_configured": true, 00:13:43.626 "data_offset": 0, 00:13:43.626 "data_size": 65536 00:13:43.626 } 00:13:43.626 ] 00:13:43.626 }' 00:13:43.626 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.626 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.886 [2024-12-13 04:29:43.840559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.886 [2024-12-13 04:29:43.840720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.886 [2024-12-13 04:29:43.861233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.886 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.146 [2024-12-13 04:29:43.921158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.146 [2024-12-13 04:29:43.921256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.146 04:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.146 BaseBdev2 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.146 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:44.146 [ 00:13:44.146 { 00:13:44.146 "name": "BaseBdev2", 00:13:44.146 "aliases": [ 00:13:44.146 "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d" 00:13:44.146 ], 00:13:44.146 "product_name": "Malloc disk", 00:13:44.146 "block_size": 512, 00:13:44.146 "num_blocks": 65536, 00:13:44.146 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:44.146 "assigned_rate_limits": { 00:13:44.146 "rw_ios_per_sec": 0, 00:13:44.146 "rw_mbytes_per_sec": 0, 00:13:44.146 "r_mbytes_per_sec": 0, 00:13:44.147 "w_mbytes_per_sec": 0 00:13:44.147 }, 00:13:44.147 "claimed": false, 00:13:44.147 "zoned": false, 00:13:44.147 "supported_io_types": { 00:13:44.147 "read": true, 00:13:44.147 "write": true, 00:13:44.147 "unmap": true, 00:13:44.147 "flush": true, 00:13:44.147 "reset": true, 00:13:44.147 "nvme_admin": false, 00:13:44.147 "nvme_io": false, 00:13:44.147 "nvme_io_md": false, 00:13:44.147 "write_zeroes": true, 00:13:44.147 "zcopy": true, 00:13:44.147 "get_zone_info": false, 00:13:44.147 "zone_management": false, 00:13:44.147 "zone_append": false, 00:13:44.147 "compare": false, 00:13:44.147 "compare_and_write": false, 00:13:44.147 "abort": true, 00:13:44.147 "seek_hole": false, 00:13:44.147 "seek_data": false, 00:13:44.147 "copy": true, 00:13:44.147 "nvme_iov_md": false 00:13:44.147 }, 00:13:44.147 "memory_domains": [ 00:13:44.147 { 00:13:44.147 "dma_device_id": "system", 00:13:44.147 "dma_device_type": 1 00:13:44.147 }, 00:13:44.147 { 00:13:44.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.147 "dma_device_type": 2 00:13:44.147 } 00:13:44.147 ], 00:13:44.147 "driver_specific": {} 00:13:44.147 } 00:13:44.147 ] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.147 BaseBdev3 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.147 [ 00:13:44.147 { 00:13:44.147 "name": "BaseBdev3", 00:13:44.147 "aliases": [ 00:13:44.147 "e1363a34-a69a-4207-90a2-34bc2036e004" 00:13:44.147 ], 00:13:44.147 "product_name": "Malloc disk", 00:13:44.147 "block_size": 512, 00:13:44.147 "num_blocks": 65536, 00:13:44.147 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:44.147 "assigned_rate_limits": { 00:13:44.147 "rw_ios_per_sec": 0, 00:13:44.147 "rw_mbytes_per_sec": 0, 00:13:44.147 "r_mbytes_per_sec": 0, 00:13:44.147 "w_mbytes_per_sec": 0 00:13:44.147 }, 00:13:44.147 "claimed": false, 00:13:44.147 "zoned": false, 00:13:44.147 "supported_io_types": { 00:13:44.147 "read": true, 00:13:44.147 "write": true, 00:13:44.147 "unmap": true, 00:13:44.147 "flush": true, 00:13:44.147 "reset": true, 00:13:44.147 "nvme_admin": false, 00:13:44.147 "nvme_io": false, 00:13:44.147 "nvme_io_md": false, 00:13:44.147 "write_zeroes": true, 00:13:44.147 "zcopy": true, 00:13:44.147 "get_zone_info": false, 00:13:44.147 "zone_management": false, 00:13:44.147 "zone_append": false, 00:13:44.147 "compare": false, 00:13:44.147 "compare_and_write": false, 00:13:44.147 "abort": true, 00:13:44.147 "seek_hole": false, 00:13:44.147 "seek_data": false, 00:13:44.147 "copy": true, 00:13:44.147 "nvme_iov_md": false 00:13:44.147 }, 00:13:44.147 "memory_domains": [ 00:13:44.147 { 00:13:44.147 "dma_device_id": "system", 00:13:44.147 "dma_device_type": 1 00:13:44.147 }, 00:13:44.147 { 00:13:44.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.147 "dma_device_type": 2 00:13:44.147 } 00:13:44.147 ], 00:13:44.147 "driver_specific": {} 00:13:44.147 } 00:13:44.147 ] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.147 04:29:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.147 [2024-12-13 04:29:44.112905] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.147 [2024-12-13 04:29:44.113039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.147 [2024-12-13 04:29:44.113081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.147 [2024-12-13 04:29:44.115230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.147 04:29:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.147 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.407 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.407 "name": "Existed_Raid", 00:13:44.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.407 "strip_size_kb": 64, 00:13:44.407 "state": "configuring", 00:13:44.407 "raid_level": "raid5f", 00:13:44.407 "superblock": false, 00:13:44.407 "num_base_bdevs": 3, 00:13:44.407 "num_base_bdevs_discovered": 2, 00:13:44.407 "num_base_bdevs_operational": 3, 00:13:44.407 "base_bdevs_list": [ 00:13:44.407 { 00:13:44.407 "name": "BaseBdev1", 00:13:44.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.407 "is_configured": false, 00:13:44.407 "data_offset": 0, 00:13:44.407 "data_size": 0 00:13:44.407 }, 00:13:44.407 { 00:13:44.407 "name": "BaseBdev2", 00:13:44.407 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:44.407 "is_configured": true, 00:13:44.407 "data_offset": 0, 00:13:44.407 "data_size": 65536 00:13:44.407 }, 00:13:44.407 { 00:13:44.407 "name": "BaseBdev3", 00:13:44.407 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:44.407 "is_configured": true, 
00:13:44.407 "data_offset": 0, 00:13:44.407 "data_size": 65536 00:13:44.407 } 00:13:44.407 ] 00:13:44.407 }' 00:13:44.407 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.407 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 [2024-12-13 04:29:44.576269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.667 04:29:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.667 "name": "Existed_Raid", 00:13:44.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.667 "strip_size_kb": 64, 00:13:44.667 "state": "configuring", 00:13:44.667 "raid_level": "raid5f", 00:13:44.667 "superblock": false, 00:13:44.667 "num_base_bdevs": 3, 00:13:44.667 "num_base_bdevs_discovered": 1, 00:13:44.667 "num_base_bdevs_operational": 3, 00:13:44.667 "base_bdevs_list": [ 00:13:44.667 { 00:13:44.667 "name": "BaseBdev1", 00:13:44.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.667 "is_configured": false, 00:13:44.667 "data_offset": 0, 00:13:44.667 "data_size": 0 00:13:44.667 }, 00:13:44.667 { 00:13:44.667 "name": null, 00:13:44.667 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:44.667 "is_configured": false, 00:13:44.667 "data_offset": 0, 00:13:44.667 "data_size": 65536 00:13:44.667 }, 00:13:44.667 { 00:13:44.667 "name": "BaseBdev3", 00:13:44.667 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:44.667 "is_configured": true, 00:13:44.667 "data_offset": 0, 00:13:44.667 "data_size": 65536 00:13:44.667 } 00:13:44.667 ] 00:13:44.667 }' 00:13:44.667 04:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.667 04:29:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.237 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.238 [2024-12-13 04:29:45.136208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.238 BaseBdev1 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.238 04:29:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.238 [ 00:13:45.238 { 00:13:45.238 "name": "BaseBdev1", 00:13:45.238 "aliases": [ 00:13:45.238 "8a1bed06-7cc1-4719-9695-1968288a2d86" 00:13:45.238 ], 00:13:45.238 "product_name": "Malloc disk", 00:13:45.238 "block_size": 512, 00:13:45.238 "num_blocks": 65536, 00:13:45.238 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:45.238 "assigned_rate_limits": { 00:13:45.238 "rw_ios_per_sec": 0, 00:13:45.238 "rw_mbytes_per_sec": 0, 00:13:45.238 "r_mbytes_per_sec": 0, 00:13:45.238 "w_mbytes_per_sec": 0 00:13:45.238 }, 00:13:45.238 "claimed": true, 00:13:45.238 "claim_type": "exclusive_write", 00:13:45.238 "zoned": false, 00:13:45.238 "supported_io_types": { 00:13:45.238 "read": true, 00:13:45.238 "write": true, 00:13:45.238 "unmap": true, 00:13:45.238 "flush": true, 00:13:45.238 "reset": true, 00:13:45.238 "nvme_admin": false, 00:13:45.238 "nvme_io": false, 00:13:45.238 "nvme_io_md": false, 00:13:45.238 "write_zeroes": true, 00:13:45.238 "zcopy": true, 00:13:45.238 "get_zone_info": false, 00:13:45.238 "zone_management": false, 00:13:45.238 "zone_append": false, 00:13:45.238 
"compare": false, 00:13:45.238 "compare_and_write": false, 00:13:45.238 "abort": true, 00:13:45.238 "seek_hole": false, 00:13:45.238 "seek_data": false, 00:13:45.238 "copy": true, 00:13:45.238 "nvme_iov_md": false 00:13:45.238 }, 00:13:45.238 "memory_domains": [ 00:13:45.238 { 00:13:45.238 "dma_device_id": "system", 00:13:45.238 "dma_device_type": 1 00:13:45.238 }, 00:13:45.238 { 00:13:45.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.238 "dma_device_type": 2 00:13:45.238 } 00:13:45.238 ], 00:13:45.238 "driver_specific": {} 00:13:45.238 } 00:13:45.238 ] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.238 04:29:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.238 "name": "Existed_Raid", 00:13:45.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.238 "strip_size_kb": 64, 00:13:45.238 "state": "configuring", 00:13:45.238 "raid_level": "raid5f", 00:13:45.238 "superblock": false, 00:13:45.238 "num_base_bdevs": 3, 00:13:45.238 "num_base_bdevs_discovered": 2, 00:13:45.238 "num_base_bdevs_operational": 3, 00:13:45.238 "base_bdevs_list": [ 00:13:45.238 { 00:13:45.238 "name": "BaseBdev1", 00:13:45.238 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:45.238 "is_configured": true, 00:13:45.238 "data_offset": 0, 00:13:45.238 "data_size": 65536 00:13:45.238 }, 00:13:45.238 { 00:13:45.238 "name": null, 00:13:45.238 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:45.238 "is_configured": false, 00:13:45.238 "data_offset": 0, 00:13:45.238 "data_size": 65536 00:13:45.238 }, 00:13:45.238 { 00:13:45.238 "name": "BaseBdev3", 00:13:45.238 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:45.238 "is_configured": true, 00:13:45.238 "data_offset": 0, 00:13:45.238 "data_size": 65536 00:13:45.238 } 00:13:45.238 ] 00:13:45.238 }' 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.238 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 04:29:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 [2024-12-13 04:29:45.691340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.807 04:29:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.807 "name": "Existed_Raid", 00:13:45.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.807 "strip_size_kb": 64, 00:13:45.807 "state": "configuring", 00:13:45.807 "raid_level": "raid5f", 00:13:45.807 "superblock": false, 00:13:45.807 "num_base_bdevs": 3, 00:13:45.807 "num_base_bdevs_discovered": 1, 00:13:45.807 "num_base_bdevs_operational": 3, 00:13:45.807 "base_bdevs_list": [ 00:13:45.807 { 00:13:45.807 "name": "BaseBdev1", 00:13:45.807 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:45.807 "is_configured": true, 00:13:45.807 "data_offset": 0, 00:13:45.807 "data_size": 65536 00:13:45.807 }, 00:13:45.807 { 00:13:45.807 "name": null, 00:13:45.807 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:45.807 "is_configured": false, 00:13:45.807 "data_offset": 0, 00:13:45.807 "data_size": 65536 00:13:45.807 }, 00:13:45.807 { 00:13:45.807 "name": null, 
00:13:45.807 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:45.807 "is_configured": false, 00:13:45.807 "data_offset": 0, 00:13:45.807 "data_size": 65536 00:13:45.807 } 00:13:45.807 ] 00:13:45.807 }' 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.807 04:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.377 [2024-12-13 04:29:46.174557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.377 04:29:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.377 "name": "Existed_Raid", 00:13:46.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.377 "strip_size_kb": 64, 00:13:46.377 "state": "configuring", 00:13:46.377 "raid_level": "raid5f", 00:13:46.377 "superblock": false, 00:13:46.377 "num_base_bdevs": 3, 00:13:46.377 "num_base_bdevs_discovered": 2, 00:13:46.377 "num_base_bdevs_operational": 3, 00:13:46.377 "base_bdevs_list": [ 00:13:46.377 { 
00:13:46.377 "name": "BaseBdev1", 00:13:46.377 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:46.377 "is_configured": true, 00:13:46.377 "data_offset": 0, 00:13:46.377 "data_size": 65536 00:13:46.377 }, 00:13:46.377 { 00:13:46.377 "name": null, 00:13:46.377 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:46.377 "is_configured": false, 00:13:46.377 "data_offset": 0, 00:13:46.377 "data_size": 65536 00:13:46.377 }, 00:13:46.377 { 00:13:46.377 "name": "BaseBdev3", 00:13:46.377 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:46.377 "is_configured": true, 00:13:46.377 "data_offset": 0, 00:13:46.377 "data_size": 65536 00:13:46.377 } 00:13:46.377 ] 00:13:46.377 }' 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.377 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.638 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.638 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.638 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.638 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.897 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.897 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:46.897 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.897 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.898 [2024-12-13 04:29:46.677681] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.898 "name": "Existed_Raid", 00:13:46.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.898 "strip_size_kb": 64, 00:13:46.898 "state": "configuring", 00:13:46.898 "raid_level": "raid5f", 00:13:46.898 "superblock": false, 00:13:46.898 "num_base_bdevs": 3, 00:13:46.898 "num_base_bdevs_discovered": 1, 00:13:46.898 "num_base_bdevs_operational": 3, 00:13:46.898 "base_bdevs_list": [ 00:13:46.898 { 00:13:46.898 "name": null, 00:13:46.898 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:46.898 "is_configured": false, 00:13:46.898 "data_offset": 0, 00:13:46.898 "data_size": 65536 00:13:46.898 }, 00:13:46.898 { 00:13:46.898 "name": null, 00:13:46.898 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:46.898 "is_configured": false, 00:13:46.898 "data_offset": 0, 00:13:46.898 "data_size": 65536 00:13:46.898 }, 00:13:46.898 { 00:13:46.898 "name": "BaseBdev3", 00:13:46.898 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:46.898 "is_configured": true, 00:13:46.898 "data_offset": 0, 00:13:46.898 "data_size": 65536 00:13:46.898 } 00:13:46.898 ] 00:13:46.898 }' 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.898 04:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.157 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.157 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:47.157 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.157 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.158 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.417 04:29:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:47.417 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:47.417 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.417 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.417 [2024-12-13 04:29:47.188995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.417 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.418 04:29:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.418 "name": "Existed_Raid", 00:13:47.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.418 "strip_size_kb": 64, 00:13:47.418 "state": "configuring", 00:13:47.418 "raid_level": "raid5f", 00:13:47.418 "superblock": false, 00:13:47.418 "num_base_bdevs": 3, 00:13:47.418 "num_base_bdevs_discovered": 2, 00:13:47.418 "num_base_bdevs_operational": 3, 00:13:47.418 "base_bdevs_list": [ 00:13:47.418 { 00:13:47.418 "name": null, 00:13:47.418 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:47.418 "is_configured": false, 00:13:47.418 "data_offset": 0, 00:13:47.418 "data_size": 65536 00:13:47.418 }, 00:13:47.418 { 00:13:47.418 "name": "BaseBdev2", 00:13:47.418 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:47.418 "is_configured": true, 00:13:47.418 "data_offset": 0, 00:13:47.418 "data_size": 65536 00:13:47.418 }, 00:13:47.418 { 00:13:47.418 "name": "BaseBdev3", 00:13:47.418 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:47.418 "is_configured": true, 00:13:47.418 "data_offset": 0, 00:13:47.418 "data_size": 65536 00:13:47.418 } 00:13:47.418 ] 00:13:47.418 }' 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.418 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.678 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.678 04:29:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:47.678 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.678 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8a1bed06-7cc1-4719-9695-1968288a2d86 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 [2024-12-13 04:29:47.798927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:47.938 [2024-12-13 04:29:47.799014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:47.938 [2024-12-13 04:29:47.799041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:47.938 [2024-12-13 04:29:47.799327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:13:47.938 [2024-12-13 04:29:47.799794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:47.938 [2024-12-13 04:29:47.799840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:47.938 [2024-12-13 04:29:47.800075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.938 NewBaseBdev 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:47.938 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.938 04:29:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 [ 00:13:47.938 { 00:13:47.938 "name": "NewBaseBdev", 00:13:47.938 "aliases": [ 00:13:47.938 "8a1bed06-7cc1-4719-9695-1968288a2d86" 00:13:47.938 ], 00:13:47.938 "product_name": "Malloc disk", 00:13:47.938 "block_size": 512, 00:13:47.938 "num_blocks": 65536, 00:13:47.938 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:47.938 "assigned_rate_limits": { 00:13:47.938 "rw_ios_per_sec": 0, 00:13:47.939 "rw_mbytes_per_sec": 0, 00:13:47.939 "r_mbytes_per_sec": 0, 00:13:47.939 "w_mbytes_per_sec": 0 00:13:47.939 }, 00:13:47.939 "claimed": true, 00:13:47.939 "claim_type": "exclusive_write", 00:13:47.939 "zoned": false, 00:13:47.939 "supported_io_types": { 00:13:47.939 "read": true, 00:13:47.939 "write": true, 00:13:47.939 "unmap": true, 00:13:47.939 "flush": true, 00:13:47.939 "reset": true, 00:13:47.939 "nvme_admin": false, 00:13:47.939 "nvme_io": false, 00:13:47.939 "nvme_io_md": false, 00:13:47.939 "write_zeroes": true, 00:13:47.939 "zcopy": true, 00:13:47.939 "get_zone_info": false, 00:13:47.939 "zone_management": false, 00:13:47.939 "zone_append": false, 00:13:47.939 "compare": false, 00:13:47.939 "compare_and_write": false, 00:13:47.939 "abort": true, 00:13:47.939 "seek_hole": false, 00:13:47.939 "seek_data": false, 00:13:47.939 "copy": true, 00:13:47.939 "nvme_iov_md": false 00:13:47.939 }, 00:13:47.939 "memory_domains": [ 00:13:47.939 { 00:13:47.939 "dma_device_id": "system", 00:13:47.939 "dma_device_type": 1 00:13:47.939 }, 00:13:47.939 { 00:13:47.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.939 "dma_device_type": 2 00:13:47.939 } 00:13:47.939 ], 00:13:47.939 "driver_specific": {} 00:13:47.939 } 00:13:47.939 ] 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:47.939 04:29:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.939 "name": "Existed_Raid", 00:13:47.939 "uuid": "56f5bd9f-fc3f-4cd4-9bcc-7e3f5a630d0b", 00:13:47.939 "strip_size_kb": 64, 00:13:47.939 "state": "online", 
00:13:47.939 "raid_level": "raid5f", 00:13:47.939 "superblock": false, 00:13:47.939 "num_base_bdevs": 3, 00:13:47.939 "num_base_bdevs_discovered": 3, 00:13:47.939 "num_base_bdevs_operational": 3, 00:13:47.939 "base_bdevs_list": [ 00:13:47.939 { 00:13:47.939 "name": "NewBaseBdev", 00:13:47.939 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:47.939 "is_configured": true, 00:13:47.939 "data_offset": 0, 00:13:47.939 "data_size": 65536 00:13:47.939 }, 00:13:47.939 { 00:13:47.939 "name": "BaseBdev2", 00:13:47.939 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:47.939 "is_configured": true, 00:13:47.939 "data_offset": 0, 00:13:47.939 "data_size": 65536 00:13:47.939 }, 00:13:47.939 { 00:13:47.939 "name": "BaseBdev3", 00:13:47.939 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:47.939 "is_configured": true, 00:13:47.939 "data_offset": 0, 00:13:47.939 "data_size": 65536 00:13:47.939 } 00:13:47.939 ] 00:13:47.939 }' 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.939 04:29:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.509 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.509 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:48.509 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:48.509 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.509 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.509 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 [2024-12-13 04:29:48.262365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.510 "name": "Existed_Raid", 00:13:48.510 "aliases": [ 00:13:48.510 "56f5bd9f-fc3f-4cd4-9bcc-7e3f5a630d0b" 00:13:48.510 ], 00:13:48.510 "product_name": "Raid Volume", 00:13:48.510 "block_size": 512, 00:13:48.510 "num_blocks": 131072, 00:13:48.510 "uuid": "56f5bd9f-fc3f-4cd4-9bcc-7e3f5a630d0b", 00:13:48.510 "assigned_rate_limits": { 00:13:48.510 "rw_ios_per_sec": 0, 00:13:48.510 "rw_mbytes_per_sec": 0, 00:13:48.510 "r_mbytes_per_sec": 0, 00:13:48.510 "w_mbytes_per_sec": 0 00:13:48.510 }, 00:13:48.510 "claimed": false, 00:13:48.510 "zoned": false, 00:13:48.510 "supported_io_types": { 00:13:48.510 "read": true, 00:13:48.510 "write": true, 00:13:48.510 "unmap": false, 00:13:48.510 "flush": false, 00:13:48.510 "reset": true, 00:13:48.510 "nvme_admin": false, 00:13:48.510 "nvme_io": false, 00:13:48.510 "nvme_io_md": false, 00:13:48.510 "write_zeroes": true, 00:13:48.510 "zcopy": false, 00:13:48.510 "get_zone_info": false, 00:13:48.510 "zone_management": false, 00:13:48.510 "zone_append": false, 00:13:48.510 "compare": false, 00:13:48.510 "compare_and_write": false, 00:13:48.510 "abort": false, 00:13:48.510 "seek_hole": false, 00:13:48.510 "seek_data": false, 00:13:48.510 "copy": false, 00:13:48.510 "nvme_iov_md": false 00:13:48.510 }, 00:13:48.510 "driver_specific": { 00:13:48.510 "raid": { 00:13:48.510 "uuid": "56f5bd9f-fc3f-4cd4-9bcc-7e3f5a630d0b", 
00:13:48.510 "strip_size_kb": 64, 00:13:48.510 "state": "online", 00:13:48.510 "raid_level": "raid5f", 00:13:48.510 "superblock": false, 00:13:48.510 "num_base_bdevs": 3, 00:13:48.510 "num_base_bdevs_discovered": 3, 00:13:48.510 "num_base_bdevs_operational": 3, 00:13:48.510 "base_bdevs_list": [ 00:13:48.510 { 00:13:48.510 "name": "NewBaseBdev", 00:13:48.510 "uuid": "8a1bed06-7cc1-4719-9695-1968288a2d86", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 0, 00:13:48.510 "data_size": 65536 00:13:48.510 }, 00:13:48.510 { 00:13:48.510 "name": "BaseBdev2", 00:13:48.510 "uuid": "5f96b5a8-1173-4ebd-9c20-b82b267c0a6d", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 0, 00:13:48.510 "data_size": 65536 00:13:48.510 }, 00:13:48.510 { 00:13:48.510 "name": "BaseBdev3", 00:13:48.510 "uuid": "e1363a34-a69a-4207-90a2-34bc2036e004", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 0, 00:13:48.510 "data_size": 65536 00:13:48.510 } 00:13:48.510 ] 00:13:48.510 } 00:13:48.510 } 00:13:48.510 }' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:48.510 BaseBdev2 00:13:48.510 BaseBdev3' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.510 
04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.770 [2024-12-13 04:29:48.541711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.770 [2024-12-13 04:29:48.541775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.770 [2024-12-13 04:29:48.541884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.770 [2024-12-13 04:29:48.542189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.770 [2024-12-13 04:29:48.542241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92183 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 92183 ']' 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 92183 
00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92183 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.770 killing process with pid 92183 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92183' 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 92183 00:13:48.770 [2024-12-13 04:29:48.592314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.770 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 92183 00:13:48.770 [2024-12-13 04:29:48.650796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.030 04:29:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:49.030 00:13:49.030 real 0m9.365s 00:13:49.030 user 0m15.738s 00:13:49.030 sys 0m2.069s 00:13:49.030 ************************************ 00:13:49.030 END TEST raid5f_state_function_test 00:13:49.030 ************************************ 00:13:49.031 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.031 04:29:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.291 04:29:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:49.291 04:29:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.291 
04:29:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.291 04:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.291 ************************************ 00:13:49.291 START TEST raid5f_state_function_test_sb 00:13:49.291 ************************************ 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.291 
04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:49.291 Process raid pid: 92793 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92793 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92793' 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 92793 00:13:49.291 04:29:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92793 ']' 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.291 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.291 [2024-12-13 04:29:49.159457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:49.291 [2024-12-13 04:29:49.159651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.291 [2024-12-13 04:29:49.292486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.552 [2024-12-13 04:29:49.330956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.552 [2024-12-13 04:29:49.408147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.552 [2024-12-13 04:29:49.408278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:50.121 04:29:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.121 [2024-12-13 04:29:49.979788] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.121 [2024-12-13 04:29:49.979910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.121 [2024-12-13 04:29:49.979944] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.121 [2024-12-13 04:29:49.979967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.121 [2024-12-13 04:29:49.979985] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.121 [2024-12-13 04:29:49.980025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.121 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.122 04:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.122 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.122 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.122 "name": "Existed_Raid", 00:13:50.122 "uuid": "b34610d0-c42e-4dc3-bae2-b30f1e94d279", 00:13:50.122 "strip_size_kb": 64, 00:13:50.122 "state": "configuring", 00:13:50.122 "raid_level": "raid5f", 00:13:50.122 "superblock": true, 00:13:50.122 "num_base_bdevs": 3, 00:13:50.122 "num_base_bdevs_discovered": 0, 00:13:50.122 "num_base_bdevs_operational": 3, 00:13:50.122 "base_bdevs_list": [ 00:13:50.122 { 00:13:50.122 "name": "BaseBdev1", 00:13:50.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.122 "is_configured": false, 00:13:50.122 "data_offset": 0, 00:13:50.122 "data_size": 0 00:13:50.122 }, 00:13:50.122 { 00:13:50.122 "name": "BaseBdev2", 00:13:50.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.122 "is_configured": false, 00:13:50.122 
"data_offset": 0, 00:13:50.122 "data_size": 0 00:13:50.122 }, 00:13:50.122 { 00:13:50.122 "name": "BaseBdev3", 00:13:50.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.122 "is_configured": false, 00:13:50.122 "data_offset": 0, 00:13:50.122 "data_size": 0 00:13:50.122 } 00:13:50.122 ] 00:13:50.122 }' 00:13:50.122 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.122 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.692 [2024-12-13 04:29:50.418898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.692 [2024-12-13 04:29:50.418975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.692 [2024-12-13 04:29:50.430908] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.692 [2024-12-13 04:29:50.430994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.692 [2024-12-13 04:29:50.431019] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.692 [2024-12-13 04:29:50.431042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.692 [2024-12-13 04:29:50.431059] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.692 [2024-12-13 04:29:50.431079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.692 [2024-12-13 04:29:50.458053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.692 BaseBdev1 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.692 [ 00:13:50.692 { 00:13:50.692 "name": "BaseBdev1", 00:13:50.692 "aliases": [ 00:13:50.692 "caa13a49-60ce-45a6-a932-cfc159881a15" 00:13:50.692 ], 00:13:50.692 "product_name": "Malloc disk", 00:13:50.692 "block_size": 512, 00:13:50.692 "num_blocks": 65536, 00:13:50.692 "uuid": "caa13a49-60ce-45a6-a932-cfc159881a15", 00:13:50.692 "assigned_rate_limits": { 00:13:50.692 "rw_ios_per_sec": 0, 00:13:50.692 "rw_mbytes_per_sec": 0, 00:13:50.692 "r_mbytes_per_sec": 0, 00:13:50.692 "w_mbytes_per_sec": 0 00:13:50.692 }, 00:13:50.692 "claimed": true, 00:13:50.692 "claim_type": "exclusive_write", 00:13:50.692 "zoned": false, 00:13:50.692 "supported_io_types": { 00:13:50.692 "read": true, 00:13:50.692 "write": true, 00:13:50.692 "unmap": true, 00:13:50.692 "flush": true, 00:13:50.692 "reset": true, 00:13:50.692 "nvme_admin": false, 00:13:50.692 "nvme_io": false, 00:13:50.692 "nvme_io_md": false, 00:13:50.692 "write_zeroes": true, 00:13:50.692 "zcopy": true, 00:13:50.692 "get_zone_info": false, 00:13:50.692 "zone_management": false, 00:13:50.692 "zone_append": false, 00:13:50.692 "compare": false, 00:13:50.692 "compare_and_write": false, 00:13:50.692 "abort": true, 00:13:50.692 "seek_hole": false, 00:13:50.692 
"seek_data": false, 00:13:50.692 "copy": true, 00:13:50.692 "nvme_iov_md": false 00:13:50.692 }, 00:13:50.692 "memory_domains": [ 00:13:50.692 { 00:13:50.692 "dma_device_id": "system", 00:13:50.692 "dma_device_type": 1 00:13:50.692 }, 00:13:50.692 { 00:13:50.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.692 "dma_device_type": 2 00:13:50.692 } 00:13:50.692 ], 00:13:50.692 "driver_specific": {} 00:13:50.692 } 00:13:50.692 ] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.692 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.693 "name": "Existed_Raid", 00:13:50.693 "uuid": "93db0069-afba-454f-8936-cfec1fa478a0", 00:13:50.693 "strip_size_kb": 64, 00:13:50.693 "state": "configuring", 00:13:50.693 "raid_level": "raid5f", 00:13:50.693 "superblock": true, 00:13:50.693 "num_base_bdevs": 3, 00:13:50.693 "num_base_bdevs_discovered": 1, 00:13:50.693 "num_base_bdevs_operational": 3, 00:13:50.693 "base_bdevs_list": [ 00:13:50.693 { 00:13:50.693 "name": "BaseBdev1", 00:13:50.693 "uuid": "caa13a49-60ce-45a6-a932-cfc159881a15", 00:13:50.693 "is_configured": true, 00:13:50.693 "data_offset": 2048, 00:13:50.693 "data_size": 63488 00:13:50.693 }, 00:13:50.693 { 00:13:50.693 "name": "BaseBdev2", 00:13:50.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.693 "is_configured": false, 00:13:50.693 "data_offset": 0, 00:13:50.693 "data_size": 0 00:13:50.693 }, 00:13:50.693 { 00:13:50.693 "name": "BaseBdev3", 00:13:50.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.693 "is_configured": false, 00:13:50.693 "data_offset": 0, 00:13:50.693 "data_size": 0 00:13:50.693 } 00:13:50.693 ] 00:13:50.693 }' 00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.693 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.953 [2024-12-13 04:29:50.909295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.953 [2024-12-13 04:29:50.909384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.953 [2024-12-13 04:29:50.921317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.953 [2024-12-13 04:29:50.923457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.953 [2024-12-13 04:29:50.923547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.953 [2024-12-13 04:29:50.923575] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.953 [2024-12-13 04:29:50.923598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.953 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.213 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.213 "name": 
"Existed_Raid", 00:13:51.213 "uuid": "d72594f2-df4f-4cfe-a270-54d5d27bd59b", 00:13:51.213 "strip_size_kb": 64, 00:13:51.213 "state": "configuring", 00:13:51.213 "raid_level": "raid5f", 00:13:51.213 "superblock": true, 00:13:51.213 "num_base_bdevs": 3, 00:13:51.213 "num_base_bdevs_discovered": 1, 00:13:51.213 "num_base_bdevs_operational": 3, 00:13:51.213 "base_bdevs_list": [ 00:13:51.213 { 00:13:51.213 "name": "BaseBdev1", 00:13:51.213 "uuid": "caa13a49-60ce-45a6-a932-cfc159881a15", 00:13:51.213 "is_configured": true, 00:13:51.213 "data_offset": 2048, 00:13:51.213 "data_size": 63488 00:13:51.213 }, 00:13:51.213 { 00:13:51.213 "name": "BaseBdev2", 00:13:51.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.213 "is_configured": false, 00:13:51.213 "data_offset": 0, 00:13:51.213 "data_size": 0 00:13:51.213 }, 00:13:51.213 { 00:13:51.213 "name": "BaseBdev3", 00:13:51.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.213 "is_configured": false, 00:13:51.213 "data_offset": 0, 00:13:51.213 "data_size": 0 00:13:51.213 } 00:13:51.213 ] 00:13:51.213 }' 00:13:51.213 04:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.213 04:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.474 [2024-12-13 04:29:51.417203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.474 BaseBdev2 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.474 [ 00:13:51.474 { 00:13:51.474 "name": "BaseBdev2", 00:13:51.474 "aliases": [ 00:13:51.474 "3276319c-5340-4bff-9e96-564d8123f02a" 00:13:51.474 ], 00:13:51.474 "product_name": "Malloc disk", 00:13:51.474 "block_size": 512, 00:13:51.474 "num_blocks": 65536, 00:13:51.474 "uuid": "3276319c-5340-4bff-9e96-564d8123f02a", 00:13:51.474 "assigned_rate_limits": { 00:13:51.474 "rw_ios_per_sec": 0, 00:13:51.474 "rw_mbytes_per_sec": 0, 00:13:51.474 "r_mbytes_per_sec": 0, 00:13:51.474 "w_mbytes_per_sec": 0 00:13:51.474 }, 00:13:51.474 "claimed": true, 
00:13:51.474 "claim_type": "exclusive_write", 00:13:51.474 "zoned": false, 00:13:51.474 "supported_io_types": { 00:13:51.474 "read": true, 00:13:51.474 "write": true, 00:13:51.474 "unmap": true, 00:13:51.474 "flush": true, 00:13:51.474 "reset": true, 00:13:51.474 "nvme_admin": false, 00:13:51.474 "nvme_io": false, 00:13:51.474 "nvme_io_md": false, 00:13:51.474 "write_zeroes": true, 00:13:51.474 "zcopy": true, 00:13:51.474 "get_zone_info": false, 00:13:51.474 "zone_management": false, 00:13:51.474 "zone_append": false, 00:13:51.474 "compare": false, 00:13:51.474 "compare_and_write": false, 00:13:51.474 "abort": true, 00:13:51.474 "seek_hole": false, 00:13:51.474 "seek_data": false, 00:13:51.474 "copy": true, 00:13:51.474 "nvme_iov_md": false 00:13:51.474 }, 00:13:51.474 "memory_domains": [ 00:13:51.474 { 00:13:51.474 "dma_device_id": "system", 00:13:51.474 "dma_device_type": 1 00:13:51.474 }, 00:13:51.474 { 00:13:51.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.474 "dma_device_type": 2 00:13:51.474 } 00:13:51.474 ], 00:13:51.474 "driver_specific": {} 00:13:51.474 } 00:13:51.474 ] 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.474 04:29:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.474 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.733 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.733 "name": "Existed_Raid", 00:13:51.733 "uuid": "d72594f2-df4f-4cfe-a270-54d5d27bd59b", 00:13:51.733 "strip_size_kb": 64, 00:13:51.733 "state": "configuring", 00:13:51.733 "raid_level": "raid5f", 00:13:51.733 "superblock": true, 00:13:51.733 "num_base_bdevs": 3, 00:13:51.733 "num_base_bdevs_discovered": 2, 00:13:51.733 "num_base_bdevs_operational": 3, 00:13:51.733 "base_bdevs_list": [ 00:13:51.733 { 00:13:51.733 "name": "BaseBdev1", 00:13:51.733 "uuid": "caa13a49-60ce-45a6-a932-cfc159881a15", 
00:13:51.733 "is_configured": true, 00:13:51.733 "data_offset": 2048, 00:13:51.733 "data_size": 63488 00:13:51.733 }, 00:13:51.733 { 00:13:51.733 "name": "BaseBdev2", 00:13:51.733 "uuid": "3276319c-5340-4bff-9e96-564d8123f02a", 00:13:51.733 "is_configured": true, 00:13:51.733 "data_offset": 2048, 00:13:51.733 "data_size": 63488 00:13:51.733 }, 00:13:51.733 { 00:13:51.733 "name": "BaseBdev3", 00:13:51.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.733 "is_configured": false, 00:13:51.733 "data_offset": 0, 00:13:51.733 "data_size": 0 00:13:51.733 } 00:13:51.733 ] 00:13:51.733 }' 00:13:51.733 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.733 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 [2024-12-13 04:29:51.883914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.993 BaseBdev3 00:13:51.993 [2024-12-13 04:29:51.884688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:51.993 [2024-12-13 04:29:51.884791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.993 [2024-12-13 04:29:51.885779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.993 [2024-12-13 04:29:51.887250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:51.993 [2024-12-13 04:29:51.887301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.993 [2024-12-13 04:29:51.887692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 [ 00:13:51.993 { 00:13:51.993 "name": "BaseBdev3", 00:13:51.993 "aliases": [ 00:13:51.993 "ca3a57fd-5f29-4ce1-b8ea-87271e4cab02" 00:13:51.993 ], 00:13:51.993 "product_name": "Malloc disk", 00:13:51.993 "block_size": 512, 00:13:51.993 
"num_blocks": 65536, 00:13:51.993 "uuid": "ca3a57fd-5f29-4ce1-b8ea-87271e4cab02", 00:13:51.993 "assigned_rate_limits": { 00:13:51.993 "rw_ios_per_sec": 0, 00:13:51.993 "rw_mbytes_per_sec": 0, 00:13:51.993 "r_mbytes_per_sec": 0, 00:13:51.993 "w_mbytes_per_sec": 0 00:13:51.993 }, 00:13:51.993 "claimed": true, 00:13:51.993 "claim_type": "exclusive_write", 00:13:51.993 "zoned": false, 00:13:51.993 "supported_io_types": { 00:13:51.993 "read": true, 00:13:51.993 "write": true, 00:13:51.993 "unmap": true, 00:13:51.993 "flush": true, 00:13:51.993 "reset": true, 00:13:51.993 "nvme_admin": false, 00:13:51.993 "nvme_io": false, 00:13:51.993 "nvme_io_md": false, 00:13:51.993 "write_zeroes": true, 00:13:51.993 "zcopy": true, 00:13:51.993 "get_zone_info": false, 00:13:51.993 "zone_management": false, 00:13:51.993 "zone_append": false, 00:13:51.993 "compare": false, 00:13:51.993 "compare_and_write": false, 00:13:51.993 "abort": true, 00:13:51.993 "seek_hole": false, 00:13:51.993 "seek_data": false, 00:13:51.993 "copy": true, 00:13:51.993 "nvme_iov_md": false 00:13:51.993 }, 00:13:51.993 "memory_domains": [ 00:13:51.993 { 00:13:51.993 "dma_device_id": "system", 00:13:51.993 "dma_device_type": 1 00:13:51.993 }, 00:13:51.993 { 00:13:51.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.993 "dma_device_type": 2 00:13:51.993 } 00:13:51.993 ], 00:13:51.993 "driver_specific": {} 00:13:51.993 } 00:13:51.993 ] 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.993 "name": "Existed_Raid", 00:13:51.993 "uuid": "d72594f2-df4f-4cfe-a270-54d5d27bd59b", 00:13:51.993 "strip_size_kb": 64, 00:13:51.993 "state": "online", 00:13:51.993 "raid_level": "raid5f", 00:13:51.993 "superblock": true, 
00:13:51.993 "num_base_bdevs": 3, 00:13:51.993 "num_base_bdevs_discovered": 3, 00:13:51.993 "num_base_bdevs_operational": 3, 00:13:51.993 "base_bdevs_list": [ 00:13:51.993 { 00:13:51.993 "name": "BaseBdev1", 00:13:51.993 "uuid": "caa13a49-60ce-45a6-a932-cfc159881a15", 00:13:51.993 "is_configured": true, 00:13:51.993 "data_offset": 2048, 00:13:51.993 "data_size": 63488 00:13:51.993 }, 00:13:51.993 { 00:13:51.993 "name": "BaseBdev2", 00:13:51.993 "uuid": "3276319c-5340-4bff-9e96-564d8123f02a", 00:13:51.993 "is_configured": true, 00:13:51.993 "data_offset": 2048, 00:13:51.993 "data_size": 63488 00:13:51.993 }, 00:13:51.993 { 00:13:51.993 "name": "BaseBdev3", 00:13:51.993 "uuid": "ca3a57fd-5f29-4ce1-b8ea-87271e4cab02", 00:13:51.993 "is_configured": true, 00:13:51.993 "data_offset": 2048, 00:13:51.993 "data_size": 63488 00:13:51.993 } 00:13:51.993 ] 00:13:51.993 }' 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.993 04:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:52.563 [2024-12-13 04:29:52.374174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:52.563 "name": "Existed_Raid", 00:13:52.563 "aliases": [ 00:13:52.563 "d72594f2-df4f-4cfe-a270-54d5d27bd59b" 00:13:52.563 ], 00:13:52.563 "product_name": "Raid Volume", 00:13:52.563 "block_size": 512, 00:13:52.563 "num_blocks": 126976, 00:13:52.563 "uuid": "d72594f2-df4f-4cfe-a270-54d5d27bd59b", 00:13:52.563 "assigned_rate_limits": { 00:13:52.563 "rw_ios_per_sec": 0, 00:13:52.563 "rw_mbytes_per_sec": 0, 00:13:52.563 "r_mbytes_per_sec": 0, 00:13:52.563 "w_mbytes_per_sec": 0 00:13:52.563 }, 00:13:52.563 "claimed": false, 00:13:52.563 "zoned": false, 00:13:52.563 "supported_io_types": { 00:13:52.563 "read": true, 00:13:52.563 "write": true, 00:13:52.563 "unmap": false, 00:13:52.563 "flush": false, 00:13:52.563 "reset": true, 00:13:52.563 "nvme_admin": false, 00:13:52.563 "nvme_io": false, 00:13:52.563 "nvme_io_md": false, 00:13:52.563 "write_zeroes": true, 00:13:52.563 "zcopy": false, 00:13:52.563 "get_zone_info": false, 00:13:52.563 "zone_management": false, 00:13:52.563 "zone_append": false, 00:13:52.563 "compare": false, 00:13:52.563 "compare_and_write": false, 00:13:52.563 "abort": false, 00:13:52.563 "seek_hole": false, 00:13:52.563 "seek_data": false, 00:13:52.563 "copy": false, 00:13:52.563 "nvme_iov_md": false 00:13:52.563 }, 00:13:52.563 "driver_specific": { 00:13:52.563 "raid": { 00:13:52.563 "uuid": "d72594f2-df4f-4cfe-a270-54d5d27bd59b", 00:13:52.563 
"strip_size_kb": 64, 00:13:52.563 "state": "online", 00:13:52.563 "raid_level": "raid5f", 00:13:52.563 "superblock": true, 00:13:52.563 "num_base_bdevs": 3, 00:13:52.563 "num_base_bdevs_discovered": 3, 00:13:52.563 "num_base_bdevs_operational": 3, 00:13:52.563 "base_bdevs_list": [ 00:13:52.563 { 00:13:52.563 "name": "BaseBdev1", 00:13:52.563 "uuid": "caa13a49-60ce-45a6-a932-cfc159881a15", 00:13:52.563 "is_configured": true, 00:13:52.563 "data_offset": 2048, 00:13:52.563 "data_size": 63488 00:13:52.563 }, 00:13:52.563 { 00:13:52.563 "name": "BaseBdev2", 00:13:52.563 "uuid": "3276319c-5340-4bff-9e96-564d8123f02a", 00:13:52.563 "is_configured": true, 00:13:52.563 "data_offset": 2048, 00:13:52.563 "data_size": 63488 00:13:52.563 }, 00:13:52.563 { 00:13:52.563 "name": "BaseBdev3", 00:13:52.563 "uuid": "ca3a57fd-5f29-4ce1-b8ea-87271e4cab02", 00:13:52.563 "is_configured": true, 00:13:52.563 "data_offset": 2048, 00:13:52.563 "data_size": 63488 00:13:52.563 } 00:13:52.563 ] 00:13:52.563 } 00:13:52.563 } 00:13:52.563 }' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:52.563 BaseBdev2 00:13:52.563 BaseBdev3' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.563 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.823 [2024-12-13 04:29:52.657561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
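The `[[ 512 == \5\1\2\ \ \  ]]` comparisons above check that the raid volume and each base bdev report the same metadata format, built by the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. In jq, `join` renders null fields as empty strings, which is why the expected value is `512` followed by three spaces. A minimal Python sketch of that comparison (the `metadata_format` helper is illustrative, not part of the autotest scripts):

```python
def metadata_format(bdev):
    """Mimic the jq filter: join block_size, md_size, md_interleave,
    dif_type with spaces, rendering missing/null fields as ""."""
    fields = (bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type"))
    return " ".join("" if f is None else str(f) for f in fields)

# A plain Malloc disk reports only block_size, so the format string is
# "512" plus three trailing spaces -- matching the bash pattern above.
raid_volume = {"block_size": 512}
base_bdev = {"block_size": 512}
assert metadata_format(raid_volume) == metadata_format(base_bdev) == "512   "
```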
00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.823 "name": "Existed_Raid", 00:13:52.823 "uuid": "d72594f2-df4f-4cfe-a270-54d5d27bd59b", 00:13:52.823 "strip_size_kb": 64, 00:13:52.823 "state": "online", 00:13:52.823 "raid_level": "raid5f", 00:13:52.823 "superblock": true, 00:13:52.823 "num_base_bdevs": 3, 00:13:52.823 "num_base_bdevs_discovered": 2, 00:13:52.823 "num_base_bdevs_operational": 2, 
00:13:52.823 "base_bdevs_list": [ 00:13:52.823 { 00:13:52.823 "name": null, 00:13:52.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.823 "is_configured": false, 00:13:52.823 "data_offset": 0, 00:13:52.823 "data_size": 63488 00:13:52.823 }, 00:13:52.823 { 00:13:52.823 "name": "BaseBdev2", 00:13:52.823 "uuid": "3276319c-5340-4bff-9e96-564d8123f02a", 00:13:52.823 "is_configured": true, 00:13:52.823 "data_offset": 2048, 00:13:52.823 "data_size": 63488 00:13:52.823 }, 00:13:52.823 { 00:13:52.823 "name": "BaseBdev3", 00:13:52.823 "uuid": "ca3a57fd-5f29-4ce1-b8ea-87271e4cab02", 00:13:52.823 "is_configured": true, 00:13:52.823 "data_offset": 2048, 00:13:52.823 "data_size": 63488 00:13:52.823 } 00:13:52.823 ] 00:13:52.823 }' 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.823 04:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
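The dump above shows the state `verify_raid_bdev_state Existed_Raid online raid5f 64 2` asserts: after `bdev_malloc_delete BaseBdev1`, the raid5f array stays `online` with the deleted slot replaced by a null entry and only 2 of 3 base bdevs discovered. A hedged Python sketch of those checks against the JSON returned by `bdev_raid_get_bdevs` (`verify_state` is an illustrative helper, not the shell function itself):

```python
import json

# Trimmed version of the raid_bdev_info dump printed above.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the fields verify_raid_bdev_state extracts with jq.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# raid5f tolerates one missing base bdev, so the array is still online.
verify_state(raid_bdev_info, "online", "raid5f", 64, 2)
```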
00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.083 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.083 [2024-12-13 04:29:53.085358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.083 [2024-12-13 04:29:53.085567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.344 [2024-12-13 04:29:53.106221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:53.344 
04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 [2024-12-13 04:29:53.166161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:53.344 [2024-12-13 04:29:53.166205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 BaseBdev2 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 [ 00:13:53.344 { 
00:13:53.344 "name": "BaseBdev2", 00:13:53.344 "aliases": [ 00:13:53.344 "5d6295c6-0237-498b-a0f5-ce3cacfd7ead" 00:13:53.344 ], 00:13:53.344 "product_name": "Malloc disk", 00:13:53.344 "block_size": 512, 00:13:53.344 "num_blocks": 65536, 00:13:53.344 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:53.344 "assigned_rate_limits": { 00:13:53.344 "rw_ios_per_sec": 0, 00:13:53.344 "rw_mbytes_per_sec": 0, 00:13:53.344 "r_mbytes_per_sec": 0, 00:13:53.344 "w_mbytes_per_sec": 0 00:13:53.344 }, 00:13:53.344 "claimed": false, 00:13:53.344 "zoned": false, 00:13:53.344 "supported_io_types": { 00:13:53.344 "read": true, 00:13:53.344 "write": true, 00:13:53.344 "unmap": true, 00:13:53.344 "flush": true, 00:13:53.344 "reset": true, 00:13:53.344 "nvme_admin": false, 00:13:53.344 "nvme_io": false, 00:13:53.344 "nvme_io_md": false, 00:13:53.344 "write_zeroes": true, 00:13:53.344 "zcopy": true, 00:13:53.344 "get_zone_info": false, 00:13:53.344 "zone_management": false, 00:13:53.344 "zone_append": false, 00:13:53.344 "compare": false, 00:13:53.344 "compare_and_write": false, 00:13:53.344 "abort": true, 00:13:53.344 "seek_hole": false, 00:13:53.344 "seek_data": false, 00:13:53.344 "copy": true, 00:13:53.344 "nvme_iov_md": false 00:13:53.344 }, 00:13:53.344 "memory_domains": [ 00:13:53.344 { 00:13:53.344 "dma_device_id": "system", 00:13:53.344 "dma_device_type": 1 00:13:53.344 }, 00:13:53.344 { 00:13:53.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.344 "dma_device_type": 2 00:13:53.344 } 00:13:53.344 ], 00:13:53.344 "driver_specific": {} 00:13:53.344 } 00:13:53.344 ] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 BaseBdev3 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.344 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.344 04:29:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.344 [ 00:13:53.344 { 00:13:53.344 "name": "BaseBdev3", 00:13:53.344 "aliases": [ 00:13:53.344 "924361a3-129e-454f-b52b-b4d7e1586a5d" 00:13:53.344 ], 00:13:53.344 "product_name": "Malloc disk", 00:13:53.344 "block_size": 512, 00:13:53.344 "num_blocks": 65536, 00:13:53.344 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:53.344 "assigned_rate_limits": { 00:13:53.344 "rw_ios_per_sec": 0, 00:13:53.344 "rw_mbytes_per_sec": 0, 00:13:53.344 "r_mbytes_per_sec": 0, 00:13:53.344 "w_mbytes_per_sec": 0 00:13:53.344 }, 00:13:53.345 "claimed": false, 00:13:53.345 "zoned": false, 00:13:53.345 "supported_io_types": { 00:13:53.345 "read": true, 00:13:53.345 "write": true, 00:13:53.345 "unmap": true, 00:13:53.345 "flush": true, 00:13:53.345 "reset": true, 00:13:53.345 "nvme_admin": false, 00:13:53.345 "nvme_io": false, 00:13:53.345 "nvme_io_md": false, 00:13:53.345 "write_zeroes": true, 00:13:53.345 "zcopy": true, 00:13:53.345 "get_zone_info": false, 00:13:53.345 "zone_management": false, 00:13:53.345 "zone_append": false, 00:13:53.345 "compare": false, 00:13:53.345 "compare_and_write": false, 00:13:53.345 "abort": true, 00:13:53.345 "seek_hole": false, 00:13:53.345 "seek_data": false, 00:13:53.345 "copy": true, 00:13:53.345 "nvme_iov_md": false 00:13:53.345 }, 00:13:53.345 "memory_domains": [ 00:13:53.345 { 00:13:53.345 "dma_device_id": "system", 00:13:53.345 "dma_device_type": 1 00:13:53.345 }, 00:13:53.345 { 00:13:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.345 "dma_device_type": 2 00:13:53.345 } 00:13:53.345 ], 00:13:53.345 "driver_specific": {} 00:13:53.345 } 00:13:53.345 ] 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.345 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.345 [2024-12-13 04:29:53.356084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.345 [2024-12-13 04:29:53.356130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.345 [2024-12-13 04:29:53.356151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.345 [2024-12-13 04:29:53.358315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.605 04:29:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.605 "name": "Existed_Raid", 00:13:53.605 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:53.605 "strip_size_kb": 64, 00:13:53.605 "state": "configuring", 00:13:53.605 "raid_level": "raid5f", 00:13:53.605 "superblock": true, 00:13:53.605 "num_base_bdevs": 3, 00:13:53.605 "num_base_bdevs_discovered": 2, 00:13:53.605 "num_base_bdevs_operational": 3, 00:13:53.605 "base_bdevs_list": [ 00:13:53.605 { 00:13:53.605 "name": "BaseBdev1", 00:13:53.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.605 "is_configured": false, 00:13:53.605 "data_offset": 0, 00:13:53.605 "data_size": 0 00:13:53.605 }, 00:13:53.605 { 00:13:53.605 "name": "BaseBdev2", 00:13:53.605 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:53.605 "is_configured": true, 00:13:53.605 "data_offset": 2048, 00:13:53.605 "data_size": 63488 00:13:53.605 }, 00:13:53.605 { 
00:13:53.605 "name": "BaseBdev3", 00:13:53.605 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:53.605 "is_configured": true, 00:13:53.605 "data_offset": 2048, 00:13:53.605 "data_size": 63488 00:13:53.605 } 00:13:53.605 ] 00:13:53.605 }' 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.605 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.865 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:53.865 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.865 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.865 [2024-12-13 04:29:53.755380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.865 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.865 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:53.865 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.866 "name": "Existed_Raid", 00:13:53.866 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:53.866 "strip_size_kb": 64, 00:13:53.866 "state": "configuring", 00:13:53.866 "raid_level": "raid5f", 00:13:53.866 "superblock": true, 00:13:53.866 "num_base_bdevs": 3, 00:13:53.866 "num_base_bdevs_discovered": 1, 00:13:53.866 "num_base_bdevs_operational": 3, 00:13:53.866 "base_bdevs_list": [ 00:13:53.866 { 00:13:53.866 "name": "BaseBdev1", 00:13:53.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.866 "is_configured": false, 00:13:53.866 "data_offset": 0, 00:13:53.866 "data_size": 0 00:13:53.866 }, 00:13:53.866 { 00:13:53.866 "name": null, 00:13:53.866 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:53.866 "is_configured": false, 00:13:53.866 "data_offset": 0, 00:13:53.866 "data_size": 63488 00:13:53.866 }, 00:13:53.866 { 00:13:53.866 "name": "BaseBdev3", 00:13:53.866 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:53.866 "is_configured": true, 00:13:53.866 "data_offset": 2048, 00:13:53.866 "data_size": 
63488 00:13:53.866 } 00:13:53.866 ] 00:13:53.866 }' 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.866 04:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.434 [2024-12-13 04:29:54.267203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.434 BaseBdev1 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.434 04:29:54 
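The `verify_raid_bdev_state` checks in this trace pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq` and compare fields such as `num_base_bdevs_discovered` and `is_configured`. A minimal standalone sketch of that check, run against JSON abbreviated from the dump above (assumes `jq` is installed; field names match SPDK's RPC output):

```shell
#!/bin/sh
# Count the configured base bdevs in bdev_raid_get_bdevs output -- the same
# kind of jq check verify_raid_bdev_state performs in the trace above.
# JSON abbreviated from the log: BaseBdev2 removed, BaseBdev1 not yet created.
raid_bdev_info='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs":3,
  "base_bdevs_list":[{"name":"BaseBdev1","is_configured":false},
                     {"name":null,"is_configured":false},
                     {"name":"BaseBdev3","is_configured":true}]}]'
discovered=$(printf '%s' "$raid_bdev_info" |
  jq '[.[] | select(.name == "Existed_Raid")
        | .base_bdevs_list[] | select(.is_configured)] | length')
echo "num_base_bdevs_discovered: $discovered"   # prints "num_base_bdevs_discovered: 1"
```

This mirrors the state the log reports here: one discovered base bdev out of three operational while the array stays in `configuring`.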
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.434 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.434 [ 00:13:54.434 { 00:13:54.434 "name": "BaseBdev1", 00:13:54.434 "aliases": [ 00:13:54.434 "ffd37264-8a71-4247-befb-26a210b780eb" 00:13:54.434 ], 00:13:54.434 "product_name": "Malloc disk", 00:13:54.434 "block_size": 512, 00:13:54.434 "num_blocks": 65536, 00:13:54.434 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:54.434 "assigned_rate_limits": { 00:13:54.434 "rw_ios_per_sec": 0, 00:13:54.434 "rw_mbytes_per_sec": 0, 00:13:54.434 "r_mbytes_per_sec": 0, 00:13:54.434 "w_mbytes_per_sec": 0 00:13:54.434 }, 00:13:54.434 "claimed": true, 00:13:54.434 "claim_type": "exclusive_write", 00:13:54.434 "zoned": false, 00:13:54.434 "supported_io_types": { 00:13:54.434 "read": true, 00:13:54.434 "write": true, 00:13:54.434 "unmap": true, 00:13:54.434 "flush": true, 00:13:54.434 "reset": true, 00:13:54.434 "nvme_admin": false, 00:13:54.434 
"nvme_io": false, 00:13:54.434 "nvme_io_md": false, 00:13:54.435 "write_zeroes": true, 00:13:54.435 "zcopy": true, 00:13:54.435 "get_zone_info": false, 00:13:54.435 "zone_management": false, 00:13:54.435 "zone_append": false, 00:13:54.435 "compare": false, 00:13:54.435 "compare_and_write": false, 00:13:54.435 "abort": true, 00:13:54.435 "seek_hole": false, 00:13:54.435 "seek_data": false, 00:13:54.435 "copy": true, 00:13:54.435 "nvme_iov_md": false 00:13:54.435 }, 00:13:54.435 "memory_domains": [ 00:13:54.435 { 00:13:54.435 "dma_device_id": "system", 00:13:54.435 "dma_device_type": 1 00:13:54.435 }, 00:13:54.435 { 00:13:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.435 "dma_device_type": 2 00:13:54.435 } 00:13:54.435 ], 00:13:54.435 "driver_specific": {} 00:13:54.435 } 00:13:54.435 ] 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.435 "name": "Existed_Raid", 00:13:54.435 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:54.435 "strip_size_kb": 64, 00:13:54.435 "state": "configuring", 00:13:54.435 "raid_level": "raid5f", 00:13:54.435 "superblock": true, 00:13:54.435 "num_base_bdevs": 3, 00:13:54.435 "num_base_bdevs_discovered": 2, 00:13:54.435 "num_base_bdevs_operational": 3, 00:13:54.435 "base_bdevs_list": [ 00:13:54.435 { 00:13:54.435 "name": "BaseBdev1", 00:13:54.435 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:54.435 "is_configured": true, 00:13:54.435 "data_offset": 2048, 00:13:54.435 "data_size": 63488 00:13:54.435 }, 00:13:54.435 { 00:13:54.435 "name": null, 00:13:54.435 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:54.435 "is_configured": false, 00:13:54.435 "data_offset": 0, 00:13:54.435 "data_size": 63488 00:13:54.435 }, 00:13:54.435 { 00:13:54.435 "name": "BaseBdev3", 00:13:54.435 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:54.435 "is_configured": true, 00:13:54.435 "data_offset": 2048, 00:13:54.435 "data_size": 
63488 00:13:54.435 } 00:13:54.435 ] 00:13:54.435 }' 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.435 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.004 [2024-12-13 04:29:54.826283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.004 04:29:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.004 "name": "Existed_Raid", 00:13:55.004 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:55.004 "strip_size_kb": 64, 00:13:55.004 "state": "configuring", 00:13:55.004 "raid_level": "raid5f", 00:13:55.004 "superblock": true, 00:13:55.004 "num_base_bdevs": 3, 00:13:55.004 "num_base_bdevs_discovered": 1, 00:13:55.004 "num_base_bdevs_operational": 3, 00:13:55.004 "base_bdevs_list": [ 00:13:55.004 { 00:13:55.004 "name": "BaseBdev1", 00:13:55.004 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 
00:13:55.004 "is_configured": true, 00:13:55.004 "data_offset": 2048, 00:13:55.004 "data_size": 63488 00:13:55.004 }, 00:13:55.004 { 00:13:55.004 "name": null, 00:13:55.004 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:55.004 "is_configured": false, 00:13:55.004 "data_offset": 0, 00:13:55.004 "data_size": 63488 00:13:55.004 }, 00:13:55.004 { 00:13:55.004 "name": null, 00:13:55.004 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:55.004 "is_configured": false, 00:13:55.004 "data_offset": 0, 00:13:55.004 "data_size": 63488 00:13:55.004 } 00:13:55.004 ] 00:13:55.004 }' 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.004 04:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.266 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.266 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:55.266 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.266 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 [2024-12-13 04:29:55.325431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.530 "name": "Existed_Raid", 00:13:55.530 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:55.530 "strip_size_kb": 64, 00:13:55.530 "state": "configuring", 00:13:55.530 "raid_level": "raid5f", 00:13:55.530 "superblock": true, 00:13:55.530 "num_base_bdevs": 3, 00:13:55.530 "num_base_bdevs_discovered": 2, 00:13:55.530 "num_base_bdevs_operational": 3, 00:13:55.530 "base_bdevs_list": [ 00:13:55.530 { 00:13:55.530 "name": "BaseBdev1", 00:13:55.530 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:55.530 "is_configured": true, 00:13:55.530 "data_offset": 2048, 00:13:55.530 "data_size": 63488 00:13:55.530 }, 00:13:55.530 { 00:13:55.530 "name": null, 00:13:55.530 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:55.530 "is_configured": false, 00:13:55.530 "data_offset": 0, 00:13:55.530 "data_size": 63488 00:13:55.530 }, 00:13:55.530 { 00:13:55.530 "name": "BaseBdev3", 00:13:55.530 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:55.530 "is_configured": true, 00:13:55.530 "data_offset": 2048, 00:13:55.530 "data_size": 63488 00:13:55.530 } 00:13:55.530 ] 00:13:55.530 }' 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.530 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.798 04:29:55 
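The `waitforbdev` helper seen throughout this trace (`bdev_timeout=2000`, then `rpc_cmd bdev_get_bdevs -b <name> -t 2000`, returning 0 once the bdev registers) is a poll-until-ready pattern. A generic sketch of that pattern with a stand-in probe command instead of a live SPDK RPC -- the names below are illustrative, not SPDK's actual helper:

```shell
#!/bin/sh
# Poll-until-ready skeleton: retry a probe command every 100 ms until it
# succeeds or the timeout (in milliseconds) expires. In the real test the
# probe would be `rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$timeout"`.
wait_for() {
    probe=$1
    timeout_ms=${2:-2000}   # default matches the 2000 ms seen in the trace
    i=0
    while [ $((i * 100)) -lt "$timeout_ms" ]; do
        if eval "$probe" >/dev/null 2>&1; then
            return 0        # probe succeeded: resource is ready
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1                # timed out without the probe succeeding
}
```

Usage would look like `wait_for "rpc_cmd bdev_get_bdevs -b BaseBdev1" 2000`; a probe that never succeeds makes the function return nonzero after the deadline.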
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.798 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.798 [2024-12-13 04:29:55.800624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.074 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.074 "name": "Existed_Raid", 00:13:56.074 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:56.074 "strip_size_kb": 64, 00:13:56.074 "state": "configuring", 00:13:56.074 "raid_level": "raid5f", 00:13:56.074 "superblock": true, 00:13:56.074 "num_base_bdevs": 3, 00:13:56.074 "num_base_bdevs_discovered": 1, 00:13:56.074 "num_base_bdevs_operational": 3, 00:13:56.074 "base_bdevs_list": [ 00:13:56.074 { 00:13:56.074 "name": null, 00:13:56.074 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:56.074 "is_configured": false, 00:13:56.074 "data_offset": 0, 00:13:56.074 "data_size": 63488 00:13:56.074 }, 00:13:56.074 { 00:13:56.074 "name": null, 00:13:56.074 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:56.074 "is_configured": false, 00:13:56.074 "data_offset": 0, 00:13:56.074 "data_size": 63488 00:13:56.074 }, 00:13:56.074 { 00:13:56.074 "name": "BaseBdev3", 00:13:56.075 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:56.075 "is_configured": true, 00:13:56.075 "data_offset": 2048, 00:13:56.075 "data_size": 63488 00:13:56.075 } 00:13:56.075 ] 00:13:56.075 }' 00:13:56.075 04:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.075 04:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.344 [2024-12-13 04:29:56.335449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.344 04:29:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.344 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.604 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.604 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.604 "name": "Existed_Raid", 00:13:56.604 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:56.604 "strip_size_kb": 64, 00:13:56.604 "state": "configuring", 00:13:56.604 "raid_level": "raid5f", 00:13:56.604 "superblock": true, 00:13:56.604 "num_base_bdevs": 3, 00:13:56.604 "num_base_bdevs_discovered": 2, 00:13:56.604 "num_base_bdevs_operational": 3, 00:13:56.604 "base_bdevs_list": [ 00:13:56.604 { 00:13:56.604 "name": null, 00:13:56.604 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:56.604 "is_configured": false, 00:13:56.605 "data_offset": 0, 00:13:56.605 "data_size": 63488 00:13:56.605 }, 00:13:56.605 { 00:13:56.605 "name": "BaseBdev2", 00:13:56.605 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:56.605 "is_configured": true, 00:13:56.605 "data_offset": 2048, 00:13:56.605 "data_size": 63488 00:13:56.605 }, 00:13:56.605 { 
00:13:56.605 "name": "BaseBdev3", 00:13:56.605 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:56.605 "is_configured": true, 00:13:56.605 "data_offset": 2048, 00:13:56.605 "data_size": 63488 00:13:56.605 } 00:13:56.605 ] 00:13:56.605 }' 00:13:56.605 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.605 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.865 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ffd37264-8a71-4247-befb-26a210b780eb 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.125 [2024-12-13 04:29:56.933559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:57.125 [2024-12-13 04:29:56.933742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:57.125 [2024-12-13 04:29:56.933765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.125 [2024-12-13 04:29:56.934045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:57.125 NewBaseBdev 00:13:57.125 [2024-12-13 04:29:56.934492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:57.125 [2024-12-13 04:29:56.934509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:57.125 [2024-12-13 04:29:56.934626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- 
# rpc_cmd bdev_wait_for_examine 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.125 [ 00:13:57.125 { 00:13:57.125 "name": "NewBaseBdev", 00:13:57.125 "aliases": [ 00:13:57.125 "ffd37264-8a71-4247-befb-26a210b780eb" 00:13:57.125 ], 00:13:57.125 "product_name": "Malloc disk", 00:13:57.125 "block_size": 512, 00:13:57.125 "num_blocks": 65536, 00:13:57.125 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:57.125 "assigned_rate_limits": { 00:13:57.125 "rw_ios_per_sec": 0, 00:13:57.125 "rw_mbytes_per_sec": 0, 00:13:57.125 "r_mbytes_per_sec": 0, 00:13:57.125 "w_mbytes_per_sec": 0 00:13:57.125 }, 00:13:57.125 "claimed": true, 00:13:57.125 "claim_type": "exclusive_write", 00:13:57.125 "zoned": false, 00:13:57.125 "supported_io_types": { 00:13:57.125 "read": true, 00:13:57.125 "write": true, 00:13:57.125 "unmap": true, 00:13:57.125 "flush": true, 00:13:57.125 "reset": true, 00:13:57.125 "nvme_admin": false, 00:13:57.125 "nvme_io": false, 00:13:57.125 "nvme_io_md": false, 00:13:57.125 "write_zeroes": true, 00:13:57.125 "zcopy": true, 00:13:57.125 "get_zone_info": false, 00:13:57.125 "zone_management": false, 00:13:57.125 "zone_append": false, 00:13:57.125 "compare": false, 00:13:57.125 "compare_and_write": false, 00:13:57.125 "abort": true, 00:13:57.125 "seek_hole": false, 00:13:57.125 "seek_data": false, 00:13:57.125 
"copy": true, 00:13:57.125 "nvme_iov_md": false 00:13:57.125 }, 00:13:57.125 "memory_domains": [ 00:13:57.125 { 00:13:57.125 "dma_device_id": "system", 00:13:57.125 "dma_device_type": 1 00:13:57.125 }, 00:13:57.125 { 00:13:57.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.125 "dma_device_type": 2 00:13:57.125 } 00:13:57.125 ], 00:13:57.125 "driver_specific": {} 00:13:57.125 } 00:13:57.125 ] 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.125 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.126 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.126 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.126 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.126 04:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.126 04:29:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.126 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.126 04:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.126 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.126 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.126 "name": "Existed_Raid", 00:13:57.126 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:57.126 "strip_size_kb": 64, 00:13:57.126 "state": "online", 00:13:57.126 "raid_level": "raid5f", 00:13:57.126 "superblock": true, 00:13:57.126 "num_base_bdevs": 3, 00:13:57.126 "num_base_bdevs_discovered": 3, 00:13:57.126 "num_base_bdevs_operational": 3, 00:13:57.126 "base_bdevs_list": [ 00:13:57.126 { 00:13:57.126 "name": "NewBaseBdev", 00:13:57.126 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 2048, 00:13:57.126 "data_size": 63488 00:13:57.126 }, 00:13:57.126 { 00:13:57.126 "name": "BaseBdev2", 00:13:57.126 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 2048, 00:13:57.126 "data_size": 63488 00:13:57.126 }, 00:13:57.126 { 00:13:57.126 "name": "BaseBdev3", 00:13:57.126 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 2048, 00:13:57.126 "data_size": 63488 00:13:57.126 } 00:13:57.126 ] 00:13:57.126 }' 00:13:57.126 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.126 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.695 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:13:57.695 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.696 [2024-12-13 04:29:57.452873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.696 "name": "Existed_Raid", 00:13:57.696 "aliases": [ 00:13:57.696 "c2ec74b9-6c65-4cd3-a5b9-fc571444a672" 00:13:57.696 ], 00:13:57.696 "product_name": "Raid Volume", 00:13:57.696 "block_size": 512, 00:13:57.696 "num_blocks": 126976, 00:13:57.696 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:57.696 "assigned_rate_limits": { 00:13:57.696 "rw_ios_per_sec": 0, 00:13:57.696 "rw_mbytes_per_sec": 0, 00:13:57.696 "r_mbytes_per_sec": 0, 00:13:57.696 "w_mbytes_per_sec": 0 00:13:57.696 }, 00:13:57.696 "claimed": false, 00:13:57.696 "zoned": false, 00:13:57.696 
"supported_io_types": { 00:13:57.696 "read": true, 00:13:57.696 "write": true, 00:13:57.696 "unmap": false, 00:13:57.696 "flush": false, 00:13:57.696 "reset": true, 00:13:57.696 "nvme_admin": false, 00:13:57.696 "nvme_io": false, 00:13:57.696 "nvme_io_md": false, 00:13:57.696 "write_zeroes": true, 00:13:57.696 "zcopy": false, 00:13:57.696 "get_zone_info": false, 00:13:57.696 "zone_management": false, 00:13:57.696 "zone_append": false, 00:13:57.696 "compare": false, 00:13:57.696 "compare_and_write": false, 00:13:57.696 "abort": false, 00:13:57.696 "seek_hole": false, 00:13:57.696 "seek_data": false, 00:13:57.696 "copy": false, 00:13:57.696 "nvme_iov_md": false 00:13:57.696 }, 00:13:57.696 "driver_specific": { 00:13:57.696 "raid": { 00:13:57.696 "uuid": "c2ec74b9-6c65-4cd3-a5b9-fc571444a672", 00:13:57.696 "strip_size_kb": 64, 00:13:57.696 "state": "online", 00:13:57.696 "raid_level": "raid5f", 00:13:57.696 "superblock": true, 00:13:57.696 "num_base_bdevs": 3, 00:13:57.696 "num_base_bdevs_discovered": 3, 00:13:57.696 "num_base_bdevs_operational": 3, 00:13:57.696 "base_bdevs_list": [ 00:13:57.696 { 00:13:57.696 "name": "NewBaseBdev", 00:13:57.696 "uuid": "ffd37264-8a71-4247-befb-26a210b780eb", 00:13:57.696 "is_configured": true, 00:13:57.696 "data_offset": 2048, 00:13:57.696 "data_size": 63488 00:13:57.696 }, 00:13:57.696 { 00:13:57.696 "name": "BaseBdev2", 00:13:57.696 "uuid": "5d6295c6-0237-498b-a0f5-ce3cacfd7ead", 00:13:57.696 "is_configured": true, 00:13:57.696 "data_offset": 2048, 00:13:57.696 "data_size": 63488 00:13:57.696 }, 00:13:57.696 { 00:13:57.696 "name": "BaseBdev3", 00:13:57.696 "uuid": "924361a3-129e-454f-b52b-b4d7e1586a5d", 00:13:57.696 "is_configured": true, 00:13:57.696 "data_offset": 2048, 00:13:57.696 "data_size": 63488 00:13:57.696 } 00:13:57.696 ] 00:13:57.696 } 00:13:57.696 } 00:13:57.696 }' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:57.696 BaseBdev2 00:13:57.696 BaseBdev3' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.696 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.956 [2024-12-13 04:29:57.752270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.956 [2024-12-13 04:29:57.752297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:13:57.956 [2024-12-13 04:29:57.752365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.956 [2024-12-13 04:29:57.752655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.956 [2024-12-13 04:29:57.752673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92793 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92793 ']' 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 92793 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:57.956 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.957 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92793 00:13:57.957 killing process with pid 92793 00:13:57.957 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.957 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.957 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92793' 00:13:57.957 04:29:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 92793 00:13:57.957 [2024-12-13 04:29:57.803481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.957 04:29:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 92793 00:13:57.957 [2024-12-13 04:29:57.863568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.217 04:29:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:58.217 00:13:58.217 real 0m9.133s 00:13:58.217 user 0m15.296s 00:13:58.217 sys 0m2.011s 00:13:58.217 04:29:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.217 04:29:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.217 ************************************ 00:13:58.217 END TEST raid5f_state_function_test_sb 00:13:58.217 ************************************ 00:13:58.477 04:29:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:58.477 04:29:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:58.477 04:29:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.477 04:29:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.477 ************************************ 00:13:58.477 START TEST raid5f_superblock_test 00:13:58.477 ************************************ 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=93397 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93397 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 93397 ']' 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.477 04:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.477 [2024-12-13 04:29:58.364732] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:58.477 [2024-12-13 04:29:58.364856] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93397 ] 00:13:58.737 [2024-12-13 04:29:58.496905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.737 [2024-12-13 04:29:58.535365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.737 [2024-12-13 04:29:58.612414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.737 [2024-12-13 04:29:58.612465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 malloc1 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 [2024-12-13 04:29:59.218561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.309 [2024-12-13 04:29:59.218624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.309 [2024-12-13 04:29:59.218662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:59.309 [2024-12-13 04:29:59.218678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.309 [2024-12-13 04:29:59.221082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.309 [2024-12-13 04:29:59.221122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.309 pt1 00:13:59.309 
04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 malloc2 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 [2024-12-13 04:29:59.253267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:59.309 [2024-12-13 
04:29:59.253324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.309 [2024-12-13 04:29:59.253343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.309 [2024-12-13 04:29:59.253354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.309 [2024-12-13 04:29:59.255719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.309 [2024-12-13 04:29:59.255753] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:59.309 pt2 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 malloc3 00:13:59.309 04:29:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 [2024-12-13 04:29:59.287868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:59.309 [2024-12-13 04:29:59.287922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.309 [2024-12-13 04:29:59.287943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.309 [2024-12-13 04:29:59.287955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.309 [2024-12-13 04:29:59.290350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.309 [2024-12-13 04:29:59.290384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:59.309 pt3 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.309 [2024-12-13 04:29:59.299902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:13:59.309 [2024-12-13 04:29:59.302032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:59.309 [2024-12-13 04:29:59.302090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:59.309 [2024-12-13 04:29:59.302244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:59.309 [2024-12-13 04:29:59.302262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:59.309 [2024-12-13 04:29:59.302566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:59.309 [2024-12-13 04:29:59.303044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:59.309 [2024-12-13 04:29:59.303065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:59.309 [2024-12-13 04:29:59.303208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.309 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.570 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.570 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.570 "name": "raid_bdev1", 00:13:59.570 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:13:59.570 "strip_size_kb": 64, 00:13:59.570 "state": "online", 00:13:59.570 "raid_level": "raid5f", 00:13:59.570 "superblock": true, 00:13:59.570 "num_base_bdevs": 3, 00:13:59.570 "num_base_bdevs_discovered": 3, 00:13:59.570 "num_base_bdevs_operational": 3, 00:13:59.570 "base_bdevs_list": [ 00:13:59.570 { 00:13:59.570 "name": "pt1", 00:13:59.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.570 "is_configured": true, 00:13:59.570 "data_offset": 2048, 00:13:59.570 "data_size": 63488 00:13:59.570 }, 00:13:59.570 { 00:13:59.570 "name": "pt2", 00:13:59.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.570 "is_configured": true, 00:13:59.570 "data_offset": 2048, 00:13:59.570 "data_size": 63488 00:13:59.570 }, 00:13:59.570 { 00:13:59.570 "name": "pt3", 00:13:59.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.570 "is_configured": true, 00:13:59.570 "data_offset": 2048, 00:13:59.570 "data_size": 63488 00:13:59.570 } 00:13:59.570 ] 
00:13:59.570 }' 00:13:59.570 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.570 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.830 [2024-12-13 04:29:59.800774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.830 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.830 "name": "raid_bdev1", 00:13:59.830 "aliases": [ 00:13:59.830 "246067b1-cace-48ad-beee-bed09f694314" 00:13:59.830 ], 00:13:59.830 "product_name": "Raid Volume", 00:13:59.830 "block_size": 512, 00:13:59.830 "num_blocks": 126976, 00:13:59.830 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:13:59.830 "assigned_rate_limits": { 00:13:59.830 
"rw_ios_per_sec": 0, 00:13:59.831 "rw_mbytes_per_sec": 0, 00:13:59.831 "r_mbytes_per_sec": 0, 00:13:59.831 "w_mbytes_per_sec": 0 00:13:59.831 }, 00:13:59.831 "claimed": false, 00:13:59.831 "zoned": false, 00:13:59.831 "supported_io_types": { 00:13:59.831 "read": true, 00:13:59.831 "write": true, 00:13:59.831 "unmap": false, 00:13:59.831 "flush": false, 00:13:59.831 "reset": true, 00:13:59.831 "nvme_admin": false, 00:13:59.831 "nvme_io": false, 00:13:59.831 "nvme_io_md": false, 00:13:59.831 "write_zeroes": true, 00:13:59.831 "zcopy": false, 00:13:59.831 "get_zone_info": false, 00:13:59.831 "zone_management": false, 00:13:59.831 "zone_append": false, 00:13:59.831 "compare": false, 00:13:59.831 "compare_and_write": false, 00:13:59.831 "abort": false, 00:13:59.831 "seek_hole": false, 00:13:59.831 "seek_data": false, 00:13:59.831 "copy": false, 00:13:59.831 "nvme_iov_md": false 00:13:59.831 }, 00:13:59.831 "driver_specific": { 00:13:59.831 "raid": { 00:13:59.831 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:13:59.831 "strip_size_kb": 64, 00:13:59.831 "state": "online", 00:13:59.831 "raid_level": "raid5f", 00:13:59.831 "superblock": true, 00:13:59.831 "num_base_bdevs": 3, 00:13:59.831 "num_base_bdevs_discovered": 3, 00:13:59.831 "num_base_bdevs_operational": 3, 00:13:59.831 "base_bdevs_list": [ 00:13:59.831 { 00:13:59.831 "name": "pt1", 00:13:59.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.831 "is_configured": true, 00:13:59.831 "data_offset": 2048, 00:13:59.831 "data_size": 63488 00:13:59.831 }, 00:13:59.831 { 00:13:59.831 "name": "pt2", 00:13:59.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.831 "is_configured": true, 00:13:59.831 "data_offset": 2048, 00:13:59.831 "data_size": 63488 00:13:59.831 }, 00:13:59.831 { 00:13:59.831 "name": "pt3", 00:13:59.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.831 "is_configured": true, 00:13:59.831 "data_offset": 2048, 00:13:59.831 "data_size": 63488 00:13:59.831 } 00:13:59.831 ] 
00:13:59.831 } 00:13:59.831 } 00:13:59.831 }' 00:13:59.831 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:00.091 pt2 00:14:00.091 pt3' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 04:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 [2024-12-13 04:30:00.076708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.091 04:30:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=246067b1-cace-48ad-beee-bed09f694314 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 246067b1-cace-48ad-beee-bed09f694314 ']' 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 [2024-12-13 04:30:00.104563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.091 [2024-12-13 04:30:00.104586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.091 [2024-12-13 04:30:00.104663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.091 [2024-12-13 04:30:00.104728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.091 [2024-12-13 04:30:00.104741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 [2024-12-13 04:30:00.252594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:00.353 [2024-12-13 
04:30:00.254686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:00.353 [2024-12-13 04:30:00.254728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:00.353 [2024-12-13 04:30:00.254772] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:00.353 [2024-12-13 04:30:00.254808] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:00.353 [2024-12-13 04:30:00.254827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:00.353 [2024-12-13 04:30:00.254839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.353 [2024-12-13 04:30:00.254851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:00.353 request: 00:14:00.353 { 00:14:00.353 "name": "raid_bdev1", 00:14:00.353 "raid_level": "raid5f", 00:14:00.353 "base_bdevs": [ 00:14:00.353 "malloc1", 00:14:00.353 "malloc2", 00:14:00.353 "malloc3" 00:14:00.353 ], 00:14:00.353 "strip_size_kb": 64, 00:14:00.353 "superblock": false, 00:14:00.353 "method": "bdev_raid_create", 00:14:00.353 "req_id": 1 00:14:00.353 } 00:14:00.353 Got JSON-RPC error response 00:14:00.353 response: 00:14:00.353 { 00:14:00.353 "code": -17, 00:14:00.353 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:00.353 } 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 [2024-12-13 04:30:00.300560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:00.353 [2024-12-13 04:30:00.300604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.353 [2024-12-13 04:30:00.300622] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:00.353 [2024-12-13 04:30:00.300633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.353 [2024-12-13 04:30:00.302988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.353 [2024-12-13 04:30:00.303026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:00.353 [2024-12-13 04:30:00.303079] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:00.353 [2024-12-13 04:30:00.303121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:00.353 pt1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.353 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.353 "name": "raid_bdev1", 00:14:00.353 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:00.353 "strip_size_kb": 64, 00:14:00.353 "state": "configuring", 00:14:00.353 "raid_level": "raid5f", 00:14:00.353 "superblock": true, 00:14:00.354 "num_base_bdevs": 3, 00:14:00.354 "num_base_bdevs_discovered": 1, 00:14:00.354 "num_base_bdevs_operational": 3, 00:14:00.354 "base_bdevs_list": [ 00:14:00.354 { 00:14:00.354 "name": "pt1", 00:14:00.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.354 "is_configured": true, 00:14:00.354 "data_offset": 2048, 00:14:00.354 "data_size": 63488 00:14:00.354 }, 00:14:00.354 { 00:14:00.354 "name": null, 00:14:00.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.354 "is_configured": false, 00:14:00.354 "data_offset": 2048, 00:14:00.354 "data_size": 63488 00:14:00.354 }, 00:14:00.354 { 00:14:00.354 "name": null, 00:14:00.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.354 "is_configured": false, 00:14:00.354 "data_offset": 2048, 00:14:00.354 "data_size": 63488 00:14:00.354 } 00:14:00.354 ] 00:14:00.354 }' 00:14:00.354 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.354 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 [2024-12-13 04:30:00.776572] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.923 [2024-12-13 04:30:00.776623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.923 [2024-12-13 04:30:00.776640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:00.923 [2024-12-13 04:30:00.776653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.923 [2024-12-13 04:30:00.776996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.923 [2024-12-13 04:30:00.777021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.923 [2024-12-13 04:30:00.777073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.923 [2024-12-13 04:30:00.777092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.923 pt2 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 [2024-12-13 04:30:00.788577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.923 "name": "raid_bdev1", 00:14:00.923 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:00.923 "strip_size_kb": 64, 00:14:00.923 "state": "configuring", 00:14:00.923 "raid_level": "raid5f", 00:14:00.923 "superblock": true, 00:14:00.923 "num_base_bdevs": 3, 00:14:00.923 "num_base_bdevs_discovered": 1, 00:14:00.923 "num_base_bdevs_operational": 3, 00:14:00.923 "base_bdevs_list": [ 00:14:00.923 { 00:14:00.923 "name": "pt1", 00:14:00.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.923 "is_configured": true, 00:14:00.923 "data_offset": 2048, 00:14:00.923 "data_size": 63488 00:14:00.923 }, 00:14:00.923 { 
00:14:00.923 "name": null, 00:14:00.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.923 "is_configured": false, 00:14:00.923 "data_offset": 0, 00:14:00.923 "data_size": 63488 00:14:00.923 }, 00:14:00.923 { 00:14:00.923 "name": null, 00:14:00.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.923 "is_configured": false, 00:14:00.923 "data_offset": 2048, 00:14:00.923 "data_size": 63488 00:14:00.923 } 00:14:00.923 ] 00:14:00.923 }' 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.923 04:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.493 [2024-12-13 04:30:01.228568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.493 [2024-12-13 04:30:01.228611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.493 [2024-12-13 04:30:01.228627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:01.493 [2024-12-13 04:30:01.228636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.493 [2024-12-13 04:30:01.228966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.493 [2024-12-13 04:30:01.228988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.493 [2024-12-13 
04:30:01.229040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.493 [2024-12-13 04:30:01.229056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.493 pt2 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.493 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.493 [2024-12-13 04:30:01.240564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:01.493 [2024-12-13 04:30:01.240602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.493 [2024-12-13 04:30:01.240620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:01.493 [2024-12-13 04:30:01.240627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.493 [2024-12-13 04:30:01.240952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.493 [2024-12-13 04:30:01.240976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:01.493 [2024-12-13 04:30:01.241024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:01.493 [2024-12-13 04:30:01.241040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:01.493 [2024-12-13 04:30:01.241131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:14:01.494 [2024-12-13 04:30:01.241156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:01.494 [2024-12-13 04:30:01.241392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:01.494 [2024-12-13 04:30:01.241818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:01.494 [2024-12-13 04:30:01.241839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:01.494 [2024-12-13 04:30:01.241934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.494 pt3 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.494 "name": "raid_bdev1", 00:14:01.494 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:01.494 "strip_size_kb": 64, 00:14:01.494 "state": "online", 00:14:01.494 "raid_level": "raid5f", 00:14:01.494 "superblock": true, 00:14:01.494 "num_base_bdevs": 3, 00:14:01.494 "num_base_bdevs_discovered": 3, 00:14:01.494 "num_base_bdevs_operational": 3, 00:14:01.494 "base_bdevs_list": [ 00:14:01.494 { 00:14:01.494 "name": "pt1", 00:14:01.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.494 "is_configured": true, 00:14:01.494 "data_offset": 2048, 00:14:01.494 "data_size": 63488 00:14:01.494 }, 00:14:01.494 { 00:14:01.494 "name": "pt2", 00:14:01.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.494 "is_configured": true, 00:14:01.494 "data_offset": 2048, 00:14:01.494 "data_size": 63488 00:14:01.494 }, 00:14:01.494 { 00:14:01.494 "name": "pt3", 00:14:01.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.494 "is_configured": true, 00:14:01.494 "data_offset": 2048, 00:14:01.494 "data_size": 63488 00:14:01.494 } 00:14:01.494 ] 00:14:01.494 }' 00:14:01.494 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.494 04:30:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.754 [2024-12-13 04:30:01.732723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.754 "name": "raid_bdev1", 00:14:01.754 "aliases": [ 00:14:01.754 "246067b1-cace-48ad-beee-bed09f694314" 00:14:01.754 ], 00:14:01.754 "product_name": "Raid Volume", 00:14:01.754 "block_size": 512, 00:14:01.754 "num_blocks": 126976, 00:14:01.754 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:01.754 "assigned_rate_limits": { 00:14:01.754 "rw_ios_per_sec": 0, 00:14:01.754 "rw_mbytes_per_sec": 0, 00:14:01.754 "r_mbytes_per_sec": 0, 00:14:01.754 "w_mbytes_per_sec": 0 00:14:01.754 }, 
00:14:01.754 "claimed": false, 00:14:01.754 "zoned": false, 00:14:01.754 "supported_io_types": { 00:14:01.754 "read": true, 00:14:01.754 "write": true, 00:14:01.754 "unmap": false, 00:14:01.754 "flush": false, 00:14:01.754 "reset": true, 00:14:01.754 "nvme_admin": false, 00:14:01.754 "nvme_io": false, 00:14:01.754 "nvme_io_md": false, 00:14:01.754 "write_zeroes": true, 00:14:01.754 "zcopy": false, 00:14:01.754 "get_zone_info": false, 00:14:01.754 "zone_management": false, 00:14:01.754 "zone_append": false, 00:14:01.754 "compare": false, 00:14:01.754 "compare_and_write": false, 00:14:01.754 "abort": false, 00:14:01.754 "seek_hole": false, 00:14:01.754 "seek_data": false, 00:14:01.754 "copy": false, 00:14:01.754 "nvme_iov_md": false 00:14:01.754 }, 00:14:01.754 "driver_specific": { 00:14:01.754 "raid": { 00:14:01.754 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:01.754 "strip_size_kb": 64, 00:14:01.754 "state": "online", 00:14:01.754 "raid_level": "raid5f", 00:14:01.754 "superblock": true, 00:14:01.754 "num_base_bdevs": 3, 00:14:01.754 "num_base_bdevs_discovered": 3, 00:14:01.754 "num_base_bdevs_operational": 3, 00:14:01.754 "base_bdevs_list": [ 00:14:01.754 { 00:14:01.754 "name": "pt1", 00:14:01.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.754 "is_configured": true, 00:14:01.754 "data_offset": 2048, 00:14:01.754 "data_size": 63488 00:14:01.754 }, 00:14:01.754 { 00:14:01.754 "name": "pt2", 00:14:01.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.754 "is_configured": true, 00:14:01.754 "data_offset": 2048, 00:14:01.754 "data_size": 63488 00:14:01.754 }, 00:14:01.754 { 00:14:01.754 "name": "pt3", 00:14:01.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.754 "is_configured": true, 00:14:01.754 "data_offset": 2048, 00:14:01.754 "data_size": 63488 00:14:01.754 } 00:14:01.754 ] 00:14:01.754 } 00:14:01.754 } 00:14:01.754 }' 00:14:01.754 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:02.014 pt2 00:14:02.014 pt3' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.014 04:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.014 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:02.014 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.015 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.015 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.015 [2024-12-13 04:30:02.012690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
246067b1-cace-48ad-beee-bed09f694314 '!=' 246067b1-cace-48ad-beee-bed09f694314 ']' 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.275 [2024-12-13 04:30:02.040599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.275 "name": "raid_bdev1", 00:14:02.275 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:02.275 "strip_size_kb": 64, 00:14:02.275 "state": "online", 00:14:02.275 "raid_level": "raid5f", 00:14:02.275 "superblock": true, 00:14:02.275 "num_base_bdevs": 3, 00:14:02.275 "num_base_bdevs_discovered": 2, 00:14:02.275 "num_base_bdevs_operational": 2, 00:14:02.275 "base_bdevs_list": [ 00:14:02.275 { 00:14:02.275 "name": null, 00:14:02.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.275 "is_configured": false, 00:14:02.275 "data_offset": 0, 00:14:02.275 "data_size": 63488 00:14:02.275 }, 00:14:02.275 { 00:14:02.275 "name": "pt2", 00:14:02.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.275 "is_configured": true, 00:14:02.275 "data_offset": 2048, 00:14:02.275 "data_size": 63488 00:14:02.275 }, 00:14:02.275 { 00:14:02.275 "name": "pt3", 00:14:02.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.275 "is_configured": true, 00:14:02.275 "data_offset": 2048, 00:14:02.275 "data_size": 63488 00:14:02.275 } 00:14:02.275 ] 00:14:02.275 }' 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.275 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.535 
04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.535 [2024-12-13 04:30:02.516561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.535 [2024-12-13 04:30:02.516591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.535 [2024-12-13 04:30:02.516636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.535 [2024-12-13 04:30:02.516684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.535 [2024-12-13 04:30:02.516692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.535 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.795 [2024-12-13 04:30:02.600560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:02.795 [2024-12-13 04:30:02.600601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.795 [2024-12-13 04:30:02.600617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:02.795 [2024-12-13 04:30:02.600625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.795 [2024-12-13 04:30:02.602951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.795 [2024-12-13 04:30:02.602984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.795 [2024-12-13 04:30:02.603039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:02.795 [2024-12-13 04:30:02.603066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.795 pt2 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.795 "name": "raid_bdev1", 00:14:02.795 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:02.795 "strip_size_kb": 64, 00:14:02.795 "state": "configuring", 00:14:02.795 "raid_level": "raid5f", 00:14:02.795 "superblock": true, 00:14:02.795 "num_base_bdevs": 3, 00:14:02.795 "num_base_bdevs_discovered": 1, 00:14:02.795 "num_base_bdevs_operational": 2, 00:14:02.795 "base_bdevs_list": [ 00:14:02.795 { 00:14:02.795 "name": null, 00:14:02.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.795 "is_configured": false, 00:14:02.795 "data_offset": 2048, 00:14:02.795 "data_size": 63488 00:14:02.795 }, 00:14:02.795 { 00:14:02.795 "name": "pt2", 00:14:02.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.795 "is_configured": true, 00:14:02.795 "data_offset": 2048, 00:14:02.795 "data_size": 63488 00:14:02.795 }, 00:14:02.795 { 00:14:02.795 "name": null, 00:14:02.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.795 "is_configured": false, 00:14:02.795 "data_offset": 2048, 00:14:02.795 "data_size": 63488 00:14:02.795 } 00:14:02.795 ] 00:14:02.795 }' 00:14:02.795 04:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.795 04:30:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.364 [2024-12-13 04:30:03.080570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:03.364 [2024-12-13 04:30:03.080614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.364 [2024-12-13 04:30:03.080630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:03.364 [2024-12-13 04:30:03.080638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.364 [2024-12-13 04:30:03.080962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.364 [2024-12-13 04:30:03.080985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:03.364 [2024-12-13 04:30:03.081035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:03.364 [2024-12-13 04:30:03.081051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:03.364 [2024-12-13 04:30:03.081124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:03.364 [2024-12-13 04:30:03.081138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.364 [2024-12-13 
04:30:03.081385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:03.364 [2024-12-13 04:30:03.081894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:03.364 [2024-12-13 04:30:03.081917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:03.364 [2024-12-13 04:30:03.082127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.364 pt3 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.364 "name": "raid_bdev1", 00:14:03.364 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:03.364 "strip_size_kb": 64, 00:14:03.364 "state": "online", 00:14:03.364 "raid_level": "raid5f", 00:14:03.364 "superblock": true, 00:14:03.364 "num_base_bdevs": 3, 00:14:03.364 "num_base_bdevs_discovered": 2, 00:14:03.364 "num_base_bdevs_operational": 2, 00:14:03.364 "base_bdevs_list": [ 00:14:03.364 { 00:14:03.364 "name": null, 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.364 "is_configured": false, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 }, 00:14:03.364 { 00:14:03.364 "name": "pt2", 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.364 "is_configured": true, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 }, 00:14:03.364 { 00:14:03.364 "name": "pt3", 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.364 "is_configured": true, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 } 00:14:03.364 ] 00:14:03.364 }' 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.364 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.624 [2024-12-13 04:30:03.536567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.624 [2024-12-13 04:30:03.536590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.624 [2024-12-13 04:30:03.536643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.624 [2024-12-13 04:30:03.536690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.624 [2024-12-13 04:30:03.536701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:03.624 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.625 04:30:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.625 [2024-12-13 04:30:03.608559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:03.625 [2024-12-13 04:30:03.608659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.625 [2024-12-13 04:30:03.608679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:03.625 [2024-12-13 04:30:03.608690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.625 [2024-12-13 04:30:03.611049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.625 [2024-12-13 04:30:03.611084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:03.625 [2024-12-13 04:30:03.611140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:03.625 [2024-12-13 04:30:03.611169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.625 [2024-12-13 04:30:03.611267] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:03.625 [2024-12-13 04:30:03.611286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.625 [2024-12-13 04:30:03.611308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:03.625 
[2024-12-13 04:30:03.611348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.625 pt1 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.625 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.885 04:30:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.885 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.885 "name": "raid_bdev1", 00:14:03.885 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:03.885 "strip_size_kb": 64, 00:14:03.885 "state": "configuring", 00:14:03.885 "raid_level": "raid5f", 00:14:03.885 "superblock": true, 00:14:03.885 "num_base_bdevs": 3, 00:14:03.885 "num_base_bdevs_discovered": 1, 00:14:03.885 "num_base_bdevs_operational": 2, 00:14:03.885 "base_bdevs_list": [ 00:14:03.885 { 00:14:03.885 "name": null, 00:14:03.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.885 "is_configured": false, 00:14:03.885 "data_offset": 2048, 00:14:03.885 "data_size": 63488 00:14:03.885 }, 00:14:03.885 { 00:14:03.885 "name": "pt2", 00:14:03.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.885 "is_configured": true, 00:14:03.885 "data_offset": 2048, 00:14:03.885 "data_size": 63488 00:14:03.885 }, 00:14:03.885 { 00:14:03.885 "name": null, 00:14:03.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.885 "is_configured": false, 00:14:03.885 "data_offset": 2048, 00:14:03.885 "data_size": 63488 00:14:03.885 } 00:14:03.885 ] 00:14:03.885 }' 00:14:03.885 04:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.885 04:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.145 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:04.145 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:04.145 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.145 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.145 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.405 [2024-12-13 04:30:04.168548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.405 [2024-12-13 04:30:04.168654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.405 [2024-12-13 04:30:04.168686] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:04.405 [2024-12-13 04:30:04.168717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.405 [2024-12-13 04:30:04.169062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.405 [2024-12-13 04:30:04.169124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.405 [2024-12-13 04:30:04.169199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:04.405 [2024-12-13 04:30:04.169250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.405 [2024-12-13 04:30:04.169356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:04.405 [2024-12-13 04:30:04.169411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:04.405 [2024-12-13 04:30:04.169694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:04.405 [2024-12-13 04:30:04.170190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:04.405 [2024-12-13 
04:30:04.170241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:04.405 [2024-12-13 04:30:04.170450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.405 pt3 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.405 04:30:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.405 "name": "raid_bdev1", 00:14:04.405 "uuid": "246067b1-cace-48ad-beee-bed09f694314", 00:14:04.405 "strip_size_kb": 64, 00:14:04.405 "state": "online", 00:14:04.405 "raid_level": "raid5f", 00:14:04.405 "superblock": true, 00:14:04.405 "num_base_bdevs": 3, 00:14:04.405 "num_base_bdevs_discovered": 2, 00:14:04.405 "num_base_bdevs_operational": 2, 00:14:04.405 "base_bdevs_list": [ 00:14:04.405 { 00:14:04.405 "name": null, 00:14:04.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.405 "is_configured": false, 00:14:04.405 "data_offset": 2048, 00:14:04.405 "data_size": 63488 00:14:04.405 }, 00:14:04.405 { 00:14:04.405 "name": "pt2", 00:14:04.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.405 "is_configured": true, 00:14:04.405 "data_offset": 2048, 00:14:04.405 "data_size": 63488 00:14:04.405 }, 00:14:04.405 { 00:14:04.405 "name": "pt3", 00:14:04.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.405 "is_configured": true, 00:14:04.405 "data_offset": 2048, 00:14:04.405 "data_size": 63488 00:14:04.405 } 00:14:04.405 ] 00:14:04.405 }' 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.405 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.665 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:04.665 [2024-12-13 04:30:04.668740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 246067b1-cace-48ad-beee-bed09f694314 '!=' 246067b1-cace-48ad-beee-bed09f694314 ']' 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93397 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 93397 ']' 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 93397 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93397 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 93397' 00:14:04.925 killing process with pid 93397 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 93397 00:14:04.925 [2024-12-13 04:30:04.754714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.925 [2024-12-13 04:30:04.754772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.925 [2024-12-13 04:30:04.754824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.925 [2024-12-13 04:30:04.754832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:04.925 04:30:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 93397 00:14:04.925 [2024-12-13 04:30:04.817388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.186 04:30:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:05.186 00:14:05.186 real 0m6.871s 00:14:05.186 user 0m11.425s 00:14:05.186 sys 0m1.476s 00:14:05.186 04:30:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.186 ************************************ 00:14:05.186 END TEST raid5f_superblock_test 00:14:05.186 ************************************ 00:14:05.186 04:30:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.445 04:30:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:05.445 04:30:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:05.445 04:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:05.445 04:30:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.445 04:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.445 ************************************ 00:14:05.445 START TEST raid5f_rebuild_test 
00:14:05.445 ************************************ 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:05.445 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=93830 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 93830 00:14:05.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 93830 ']' 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.446 04:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.446 Zero copy mechanism will not be used. 00:14:05.446 [2024-12-13 04:30:05.329781] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:05.446 [2024-12-13 04:30:05.329913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93830 ] 00:14:05.705 [2024-12-13 04:30:05.484869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.705 [2024-12-13 04:30:05.522155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.705 [2024-12-13 04:30:05.599471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.705 [2024-12-13 04:30:05.599511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.275 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.275 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:06.275 04:30:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.275 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.275 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.275 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.275 BaseBdev1_malloc 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.276 [2024-12-13 04:30:06.181576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.276 [2024-12-13 04:30:06.181636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.276 [2024-12-13 04:30:06.181675] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:06.276 [2024-12-13 04:30:06.181688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.276 [2024-12-13 04:30:06.184208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.276 [2024-12-13 04:30:06.184284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.276 BaseBdev1 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.276 
04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.276 BaseBdev2_malloc 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.276 [2024-12-13 04:30:06.216269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.276 [2024-12-13 04:30:06.216325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.276 [2024-12-13 04:30:06.216352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.276 [2024-12-13 04:30:06.216361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.276 [2024-12-13 04:30:06.218603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.276 [2024-12-13 04:30:06.218681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.276 BaseBdev2 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.276 
BaseBdev3_malloc 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.276 [2024-12-13 04:30:06.250970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:06.276 [2024-12-13 04:30:06.251026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.276 [2024-12-13 04:30:06.251055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.276 [2024-12-13 04:30:06.251064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.276 [2024-12-13 04:30:06.253533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.276 [2024-12-13 04:30:06.253566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:06.276 BaseBdev3 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.276 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.536 spare_malloc 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.536 04:30:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.536 spare_delay 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.536 [2024-12-13 04:30:06.313798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.536 [2024-12-13 04:30:06.313863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.536 [2024-12-13 04:30:06.313898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:06.536 [2024-12-13 04:30:06.313909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.536 [2024-12-13 04:30:06.316731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.536 [2024-12-13 04:30:06.316772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.536 spare 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.536 [2024-12-13 04:30:06.325841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:14:06.536 [2024-12-13 04:30:06.328012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.536 [2024-12-13 04:30:06.328075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.536 [2024-12-13 04:30:06.328174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:06.536 [2024-12-13 04:30:06.328199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:06.536 [2024-12-13 04:30:06.328495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:06.536 [2024-12-13 04:30:06.328931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:06.536 [2024-12-13 04:30:06.328943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:06.536 [2024-12-13 04:30:06.329067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.536 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.537 "name": "raid_bdev1", 00:14:06.537 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:06.537 "strip_size_kb": 64, 00:14:06.537 "state": "online", 00:14:06.537 "raid_level": "raid5f", 00:14:06.537 "superblock": false, 00:14:06.537 "num_base_bdevs": 3, 00:14:06.537 "num_base_bdevs_discovered": 3, 00:14:06.537 "num_base_bdevs_operational": 3, 00:14:06.537 "base_bdevs_list": [ 00:14:06.537 { 00:14:06.537 "name": "BaseBdev1", 00:14:06.537 "uuid": "9761fdc1-ca4c-52a4-9c6e-1ff8c9a16d25", 00:14:06.537 "is_configured": true, 00:14:06.537 "data_offset": 0, 00:14:06.537 "data_size": 65536 00:14:06.537 }, 00:14:06.537 { 00:14:06.537 "name": "BaseBdev2", 00:14:06.537 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:06.537 "is_configured": true, 00:14:06.537 "data_offset": 0, 00:14:06.537 "data_size": 65536 00:14:06.537 }, 00:14:06.537 { 00:14:06.537 "name": "BaseBdev3", 00:14:06.537 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:06.537 "is_configured": true, 00:14:06.537 "data_offset": 0, 00:14:06.537 "data_size": 65536 00:14:06.537 } 00:14:06.537 ] 00:14:06.537 }' 
00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.537 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.796 [2024-12-13 04:30:06.762796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.796 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.057 04:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:07.057 [2024-12-13 04:30:07.034189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:07.057 /dev/nbd0 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.316 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # 
break 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.317 1+0 records in 00:14:07.317 1+0 records out 00:14:07.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569835 s, 7.2 MB/s 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:07.317 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:07.576 512+0 records in 00:14:07.576 512+0 records out 00:14:07.576 67108864 bytes (67 MB, 64 MiB) copied, 0.345146 s, 194 MB/s 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.576 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.836 [2024-12-13 04:30:07.675593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.836 [2024-12-13 04:30:07.687584] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.836 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.837 "name": "raid_bdev1", 00:14:07.837 "uuid": 
"d4f71865-6630-4dc0-b383-22e04516d299", 00:14:07.837 "strip_size_kb": 64, 00:14:07.837 "state": "online", 00:14:07.837 "raid_level": "raid5f", 00:14:07.837 "superblock": false, 00:14:07.837 "num_base_bdevs": 3, 00:14:07.837 "num_base_bdevs_discovered": 2, 00:14:07.837 "num_base_bdevs_operational": 2, 00:14:07.837 "base_bdevs_list": [ 00:14:07.837 { 00:14:07.837 "name": null, 00:14:07.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.837 "is_configured": false, 00:14:07.837 "data_offset": 0, 00:14:07.837 "data_size": 65536 00:14:07.837 }, 00:14:07.837 { 00:14:07.837 "name": "BaseBdev2", 00:14:07.837 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:07.837 "is_configured": true, 00:14:07.837 "data_offset": 0, 00:14:07.837 "data_size": 65536 00:14:07.837 }, 00:14:07.837 { 00:14:07.837 "name": "BaseBdev3", 00:14:07.837 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:07.837 "is_configured": true, 00:14:07.837 "data_offset": 0, 00:14:07.837 "data_size": 65536 00:14:07.837 } 00:14:07.837 ] 00:14:07.837 }' 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.837 04:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.097 04:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.097 04:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.097 04:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.097 [2024-12-13 04:30:08.106863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.356 [2024-12-13 04:30:08.114764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:14:08.356 04:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.356 04:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 
00:14:08.356 [2024-12-13 04:30:08.117215] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.295 "name": "raid_bdev1", 00:14:09.295 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:09.295 "strip_size_kb": 64, 00:14:09.295 "state": "online", 00:14:09.295 "raid_level": "raid5f", 00:14:09.295 "superblock": false, 00:14:09.295 "num_base_bdevs": 3, 00:14:09.295 "num_base_bdevs_discovered": 3, 00:14:09.295 "num_base_bdevs_operational": 3, 00:14:09.295 "process": { 00:14:09.295 "type": "rebuild", 00:14:09.295 "target": "spare", 00:14:09.295 "progress": { 00:14:09.295 "blocks": 20480, 00:14:09.295 "percent": 15 00:14:09.295 } 00:14:09.295 }, 00:14:09.295 "base_bdevs_list": [ 00:14:09.295 { 00:14:09.295 "name": "spare", 00:14:09.295 
"uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:09.295 "is_configured": true, 00:14:09.295 "data_offset": 0, 00:14:09.295 "data_size": 65536 00:14:09.295 }, 00:14:09.295 { 00:14:09.295 "name": "BaseBdev2", 00:14:09.295 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:09.295 "is_configured": true, 00:14:09.295 "data_offset": 0, 00:14:09.295 "data_size": 65536 00:14:09.295 }, 00:14:09.295 { 00:14:09.295 "name": "BaseBdev3", 00:14:09.295 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:09.295 "is_configured": true, 00:14:09.295 "data_offset": 0, 00:14:09.295 "data_size": 65536 00:14:09.295 } 00:14:09.295 ] 00:14:09.295 }' 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.295 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.295 [2024-12-13 04:30:09.284828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.555 [2024-12-13 04:30:09.325567] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.555 [2024-12-13 04:30:09.325637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.555 [2024-12-13 04:30:09.325654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.555 [2024-12-13 04:30:09.325664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.555 "name": "raid_bdev1", 00:14:09.555 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 
00:14:09.555 "strip_size_kb": 64, 00:14:09.555 "state": "online", 00:14:09.555 "raid_level": "raid5f", 00:14:09.555 "superblock": false, 00:14:09.555 "num_base_bdevs": 3, 00:14:09.555 "num_base_bdevs_discovered": 2, 00:14:09.555 "num_base_bdevs_operational": 2, 00:14:09.555 "base_bdevs_list": [ 00:14:09.555 { 00:14:09.555 "name": null, 00:14:09.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.555 "is_configured": false, 00:14:09.555 "data_offset": 0, 00:14:09.555 "data_size": 65536 00:14:09.555 }, 00:14:09.555 { 00:14:09.555 "name": "BaseBdev2", 00:14:09.555 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:09.555 "is_configured": true, 00:14:09.555 "data_offset": 0, 00:14:09.555 "data_size": 65536 00:14:09.555 }, 00:14:09.555 { 00:14:09.555 "name": "BaseBdev3", 00:14:09.555 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:09.555 "is_configured": true, 00:14:09.555 "data_offset": 0, 00:14:09.555 "data_size": 65536 00:14:09.555 } 00:14:09.555 ] 00:14:09.555 }' 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.555 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.815 
04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.815 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.075 "name": "raid_bdev1", 00:14:10.075 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:10.075 "strip_size_kb": 64, 00:14:10.075 "state": "online", 00:14:10.075 "raid_level": "raid5f", 00:14:10.075 "superblock": false, 00:14:10.075 "num_base_bdevs": 3, 00:14:10.075 "num_base_bdevs_discovered": 2, 00:14:10.075 "num_base_bdevs_operational": 2, 00:14:10.075 "base_bdevs_list": [ 00:14:10.075 { 00:14:10.075 "name": null, 00:14:10.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.075 "is_configured": false, 00:14:10.075 "data_offset": 0, 00:14:10.075 "data_size": 65536 00:14:10.075 }, 00:14:10.075 { 00:14:10.075 "name": "BaseBdev2", 00:14:10.075 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:10.075 "is_configured": true, 00:14:10.075 "data_offset": 0, 00:14:10.075 "data_size": 65536 00:14:10.075 }, 00:14:10.075 { 00:14:10.075 "name": "BaseBdev3", 00:14:10.075 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:10.075 "is_configured": true, 00:14:10.075 "data_offset": 0, 00:14:10.075 "data_size": 65536 00:14:10.075 } 00:14:10.075 ] 00:14:10.075 }' 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 
-- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.075 [2024-12-13 04:30:09.950046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.075 [2024-12-13 04:30:09.955060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.075 04:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:10.075 [2024-12-13 04:30:09.957466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.014 04:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.014 04:30:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.014 "name": "raid_bdev1", 00:14:11.014 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:11.014 "strip_size_kb": 64, 00:14:11.014 "state": "online", 00:14:11.014 "raid_level": "raid5f", 00:14:11.014 "superblock": false, 00:14:11.014 "num_base_bdevs": 3, 00:14:11.014 "num_base_bdevs_discovered": 3, 00:14:11.014 "num_base_bdevs_operational": 3, 00:14:11.014 "process": { 00:14:11.014 "type": "rebuild", 00:14:11.014 "target": "spare", 00:14:11.014 "progress": { 00:14:11.014 "blocks": 20480, 00:14:11.014 "percent": 15 00:14:11.014 } 00:14:11.014 }, 00:14:11.014 "base_bdevs_list": [ 00:14:11.014 { 00:14:11.014 "name": "spare", 00:14:11.014 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:11.014 "is_configured": true, 00:14:11.014 "data_offset": 0, 00:14:11.014 "data_size": 65536 00:14:11.014 }, 00:14:11.014 { 00:14:11.014 "name": "BaseBdev2", 00:14:11.014 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:11.014 "is_configured": true, 00:14:11.014 "data_offset": 0, 00:14:11.014 "data_size": 65536 00:14:11.014 }, 00:14:11.014 { 00:14:11.014 "name": "BaseBdev3", 00:14:11.014 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:11.014 "is_configured": true, 00:14:11.014 "data_offset": 0, 00:14:11.014 "data_size": 65536 00:14:11.014 } 00:14:11.014 ] 00:14:11.014 }' 00:14:11.014 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=3 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.274 "name": "raid_bdev1", 00:14:11.274 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:11.274 "strip_size_kb": 64, 00:14:11.274 "state": "online", 00:14:11.274 "raid_level": "raid5f", 00:14:11.274 "superblock": false, 00:14:11.274 "num_base_bdevs": 3, 00:14:11.274 "num_base_bdevs_discovered": 3, 00:14:11.274 "num_base_bdevs_operational": 3, 00:14:11.274 "process": { 00:14:11.274 "type": "rebuild", 00:14:11.274 "target": "spare", 
00:14:11.274 "progress": { 00:14:11.274 "blocks": 22528, 00:14:11.274 "percent": 17 00:14:11.274 } 00:14:11.274 }, 00:14:11.274 "base_bdevs_list": [ 00:14:11.274 { 00:14:11.274 "name": "spare", 00:14:11.274 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:11.274 "is_configured": true, 00:14:11.274 "data_offset": 0, 00:14:11.274 "data_size": 65536 00:14:11.274 }, 00:14:11.274 { 00:14:11.274 "name": "BaseBdev2", 00:14:11.274 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:11.274 "is_configured": true, 00:14:11.274 "data_offset": 0, 00:14:11.274 "data_size": 65536 00:14:11.274 }, 00:14:11.274 { 00:14:11.274 "name": "BaseBdev3", 00:14:11.274 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:11.274 "is_configured": true, 00:14:11.274 "data_offset": 0, 00:14:11.274 "data_size": 65536 00:14:11.274 } 00:14:11.274 ] 00:14:11.274 }' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.274 04:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.655 04:30:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.655 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.655 "name": "raid_bdev1", 00:14:12.655 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:12.655 "strip_size_kb": 64, 00:14:12.655 "state": "online", 00:14:12.655 "raid_level": "raid5f", 00:14:12.655 "superblock": false, 00:14:12.655 "num_base_bdevs": 3, 00:14:12.655 "num_base_bdevs_discovered": 3, 00:14:12.655 "num_base_bdevs_operational": 3, 00:14:12.655 "process": { 00:14:12.655 "type": "rebuild", 00:14:12.655 "target": "spare", 00:14:12.655 "progress": { 00:14:12.655 "blocks": 47104, 00:14:12.655 "percent": 35 00:14:12.655 } 00:14:12.655 }, 00:14:12.655 "base_bdevs_list": [ 00:14:12.655 { 00:14:12.655 "name": "spare", 00:14:12.655 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:12.655 "is_configured": true, 00:14:12.655 "data_offset": 0, 00:14:12.655 "data_size": 65536 00:14:12.655 }, 00:14:12.655 { 00:14:12.655 "name": "BaseBdev2", 00:14:12.655 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:12.655 "is_configured": true, 00:14:12.655 "data_offset": 0, 00:14:12.655 "data_size": 65536 00:14:12.655 }, 00:14:12.655 { 00:14:12.655 "name": "BaseBdev3", 00:14:12.655 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:12.655 "is_configured": true, 00:14:12.655 "data_offset": 0, 00:14:12.655 "data_size": 65536 00:14:12.655 } 
00:14:12.656 ] 00:14:12.656 }' 00:14:12.656 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.656 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.656 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.656 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.656 04:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.595 "name": "raid_bdev1", 00:14:13.595 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:13.595 
"strip_size_kb": 64, 00:14:13.595 "state": "online", 00:14:13.595 "raid_level": "raid5f", 00:14:13.595 "superblock": false, 00:14:13.595 "num_base_bdevs": 3, 00:14:13.595 "num_base_bdevs_discovered": 3, 00:14:13.595 "num_base_bdevs_operational": 3, 00:14:13.595 "process": { 00:14:13.595 "type": "rebuild", 00:14:13.595 "target": "spare", 00:14:13.595 "progress": { 00:14:13.595 "blocks": 69632, 00:14:13.595 "percent": 53 00:14:13.595 } 00:14:13.595 }, 00:14:13.595 "base_bdevs_list": [ 00:14:13.595 { 00:14:13.595 "name": "spare", 00:14:13.595 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:13.595 "is_configured": true, 00:14:13.595 "data_offset": 0, 00:14:13.595 "data_size": 65536 00:14:13.595 }, 00:14:13.595 { 00:14:13.595 "name": "BaseBdev2", 00:14:13.595 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:13.595 "is_configured": true, 00:14:13.595 "data_offset": 0, 00:14:13.595 "data_size": 65536 00:14:13.595 }, 00:14:13.595 { 00:14:13.595 "name": "BaseBdev3", 00:14:13.595 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:13.595 "is_configured": true, 00:14:13.595 "data_offset": 0, 00:14:13.595 "data_size": 65536 00:14:13.595 } 00:14:13.595 ] 00:14:13.595 }' 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.595 04:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.978 04:30:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.978 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.978 "name": "raid_bdev1", 00:14:14.978 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:14.978 "strip_size_kb": 64, 00:14:14.978 "state": "online", 00:14:14.978 "raid_level": "raid5f", 00:14:14.978 "superblock": false, 00:14:14.978 "num_base_bdevs": 3, 00:14:14.978 "num_base_bdevs_discovered": 3, 00:14:14.978 "num_base_bdevs_operational": 3, 00:14:14.978 "process": { 00:14:14.978 "type": "rebuild", 00:14:14.978 "target": "spare", 00:14:14.978 "progress": { 00:14:14.978 "blocks": 92160, 00:14:14.978 "percent": 70 00:14:14.978 } 00:14:14.978 }, 00:14:14.978 "base_bdevs_list": [ 00:14:14.978 { 00:14:14.978 "name": "spare", 00:14:14.978 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:14.978 "is_configured": true, 00:14:14.978 "data_offset": 0, 00:14:14.978 "data_size": 65536 00:14:14.978 }, 00:14:14.978 { 00:14:14.978 "name": "BaseBdev2", 00:14:14.978 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:14.978 
"is_configured": true, 00:14:14.978 "data_offset": 0, 00:14:14.978 "data_size": 65536 00:14:14.978 }, 00:14:14.978 { 00:14:14.978 "name": "BaseBdev3", 00:14:14.979 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:14.979 "is_configured": true, 00:14:14.979 "data_offset": 0, 00:14:14.979 "data_size": 65536 00:14:14.979 } 00:14:14.979 ] 00:14:14.979 }' 00:14:14.979 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.979 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.979 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.979 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.979 04:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.918 "name": "raid_bdev1", 00:14:15.918 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:15.918 "strip_size_kb": 64, 00:14:15.918 "state": "online", 00:14:15.918 "raid_level": "raid5f", 00:14:15.918 "superblock": false, 00:14:15.918 "num_base_bdevs": 3, 00:14:15.918 "num_base_bdevs_discovered": 3, 00:14:15.918 "num_base_bdevs_operational": 3, 00:14:15.918 "process": { 00:14:15.918 "type": "rebuild", 00:14:15.918 "target": "spare", 00:14:15.918 "progress": { 00:14:15.918 "blocks": 116736, 00:14:15.918 "percent": 89 00:14:15.918 } 00:14:15.918 }, 00:14:15.918 "base_bdevs_list": [ 00:14:15.918 { 00:14:15.918 "name": "spare", 00:14:15.918 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:15.918 "is_configured": true, 00:14:15.918 "data_offset": 0, 00:14:15.918 "data_size": 65536 00:14:15.918 }, 00:14:15.918 { 00:14:15.918 "name": "BaseBdev2", 00:14:15.918 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:15.918 "is_configured": true, 00:14:15.918 "data_offset": 0, 00:14:15.918 "data_size": 65536 00:14:15.918 }, 00:14:15.918 { 00:14:15.918 "name": "BaseBdev3", 00:14:15.918 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:15.918 "is_configured": true, 00:14:15.918 "data_offset": 0, 00:14:15.918 "data_size": 65536 00:14:15.918 } 00:14:15.918 ] 00:14:15.918 }' 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.918 04:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.918 04:30:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.488 [2024-12-13 04:30:16.397521] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:16.488 [2024-12-13 04:30:16.397591] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:16.488 [2024-12-13 04:30:16.397634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.057 "name": "raid_bdev1", 00:14:17.057 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:17.057 "strip_size_kb": 64, 00:14:17.057 "state": "online", 00:14:17.057 "raid_level": "raid5f", 00:14:17.057 "superblock": false, 
00:14:17.057 "num_base_bdevs": 3, 00:14:17.057 "num_base_bdevs_discovered": 3, 00:14:17.057 "num_base_bdevs_operational": 3, 00:14:17.057 "base_bdevs_list": [ 00:14:17.057 { 00:14:17.057 "name": "spare", 00:14:17.057 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:17.057 "is_configured": true, 00:14:17.057 "data_offset": 0, 00:14:17.057 "data_size": 65536 00:14:17.057 }, 00:14:17.057 { 00:14:17.057 "name": "BaseBdev2", 00:14:17.057 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:17.057 "is_configured": true, 00:14:17.057 "data_offset": 0, 00:14:17.057 "data_size": 65536 00:14:17.057 }, 00:14:17.057 { 00:14:17.057 "name": "BaseBdev3", 00:14:17.057 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:17.057 "is_configured": true, 00:14:17.057 "data_offset": 0, 00:14:17.057 "data_size": 65536 00:14:17.057 } 00:14:17.057 ] 00:14:17.057 }' 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:17.057 04:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.057 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:17.057 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.058 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.318 "name": "raid_bdev1", 00:14:17.318 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:17.318 "strip_size_kb": 64, 00:14:17.318 "state": "online", 00:14:17.318 "raid_level": "raid5f", 00:14:17.318 "superblock": false, 00:14:17.318 "num_base_bdevs": 3, 00:14:17.318 "num_base_bdevs_discovered": 3, 00:14:17.318 "num_base_bdevs_operational": 3, 00:14:17.318 "base_bdevs_list": [ 00:14:17.318 { 00:14:17.318 "name": "spare", 00:14:17.318 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:17.318 "is_configured": true, 00:14:17.318 "data_offset": 0, 00:14:17.318 "data_size": 65536 00:14:17.318 }, 00:14:17.318 { 00:14:17.318 "name": "BaseBdev2", 00:14:17.318 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:17.318 "is_configured": true, 00:14:17.318 "data_offset": 0, 00:14:17.318 "data_size": 65536 00:14:17.318 }, 00:14:17.318 { 00:14:17.318 "name": "BaseBdev3", 00:14:17.318 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:17.318 "is_configured": true, 00:14:17.318 "data_offset": 0, 00:14:17.318 "data_size": 65536 00:14:17.318 } 00:14:17.318 ] 00:14:17.318 }' 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.318 "name": "raid_bdev1", 00:14:17.318 "uuid": "d4f71865-6630-4dc0-b383-22e04516d299", 00:14:17.318 "strip_size_kb": 
64, 00:14:17.318 "state": "online", 00:14:17.318 "raid_level": "raid5f", 00:14:17.318 "superblock": false, 00:14:17.318 "num_base_bdevs": 3, 00:14:17.318 "num_base_bdevs_discovered": 3, 00:14:17.318 "num_base_bdevs_operational": 3, 00:14:17.318 "base_bdevs_list": [ 00:14:17.318 { 00:14:17.318 "name": "spare", 00:14:17.318 "uuid": "d21152ab-3394-5a55-be82-39e9c0d60d8e", 00:14:17.318 "is_configured": true, 00:14:17.318 "data_offset": 0, 00:14:17.318 "data_size": 65536 00:14:17.318 }, 00:14:17.318 { 00:14:17.318 "name": "BaseBdev2", 00:14:17.318 "uuid": "f332b1e7-54e1-5fa1-81ff-ffcc04117e21", 00:14:17.318 "is_configured": true, 00:14:17.318 "data_offset": 0, 00:14:17.318 "data_size": 65536 00:14:17.318 }, 00:14:17.318 { 00:14:17.318 "name": "BaseBdev3", 00:14:17.318 "uuid": "54e89053-0705-5479-a1bb-96707e41d9e6", 00:14:17.318 "is_configured": true, 00:14:17.318 "data_offset": 0, 00:14:17.318 "data_size": 65536 00:14:17.318 } 00:14:17.318 ] 00:14:17.318 }' 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.318 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.888 [2024-12-13 04:30:17.664595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.888 [2024-12-13 04:30:17.664631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.888 [2024-12-13 04:30:17.664730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.888 [2024-12-13 04:30:17.664821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:14:17.888 [2024-12-13 04:30:17.664836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.888 04:30:17 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:17.888 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:18.148 /dev/nbd0 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.148 1+0 records in 00:14:18.148 1+0 records out 00:14:18.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366167 s, 11.2 MB/s 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:18.148 04:30:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:18.408 /dev/nbd1 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.408 1+0 records in 00:14:18.408 1+0 records out 00:14:18.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416871 s, 9.8 MB/s 
00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.408 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:18.409 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.409 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:18.409 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.409 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.668 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 93830 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 93830 ']' 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- 
# kill -0 93830 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93830 00:14:18.929 killing process with pid 93830 00:14:18.929 Received shutdown signal, test time was about 60.000000 seconds 00:14:18.929 00:14:18.929 Latency(us) 00:14:18.929 [2024-12-13T04:30:18.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.929 [2024-12-13T04:30:18.944Z] =================================================================================================================== 00:14:18.929 [2024-12-13T04:30:18.944Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93830' 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 93830 00:14:18.929 [2024-12-13 04:30:18.755543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.929 04:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 93830 00:14:18.929 [2024-12-13 04:30:18.830575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.189 04:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:19.189 00:14:19.189 real 0m13.907s 00:14:19.189 user 0m17.314s 00:14:19.189 sys 0m2.142s 00:14:19.189 04:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.189 04:30:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.189 ************************************ 00:14:19.189 END TEST raid5f_rebuild_test 00:14:19.189 ************************************ 00:14:19.189 04:30:19 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:19.189 04:30:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:19.189 04:30:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.449 04:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.449 ************************************ 00:14:19.449 START TEST raid5f_rebuild_test_sb 00:14:19.449 ************************************ 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 
-- # raid_pid=94253 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94253 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94253 ']' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.449 04:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.449 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.449 Zero copy mechanism will not be used. 00:14:19.449 [2024-12-13 04:30:19.327460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:19.449 [2024-12-13 04:30:19.327588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94253 ] 00:14:19.709 [2024-12-13 04:30:19.481149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.709 [2024-12-13 04:30:19.521588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.709 [2024-12-13 04:30:19.598726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.709 [2024-12-13 04:30:19.598765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 BaseBdev1_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 [2024-12-13 04:30:20.160961] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.279 [2024-12-13 04:30:20.161020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.279 [2024-12-13 04:30:20.161050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:20.279 [2024-12-13 04:30:20.161063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.279 [2024-12-13 04:30:20.163446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.279 [2024-12-13 04:30:20.163491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.279 BaseBdev1 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 BaseBdev2_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 [2024-12-13 04:30:20.195553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:20.279 [2024-12-13 04:30:20.195599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:20.279 [2024-12-13 04:30:20.195623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:20.279 [2024-12-13 04:30:20.195632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.279 [2024-12-13 04:30:20.197935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.279 [2024-12-13 04:30:20.197990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:20.279 BaseBdev2 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 BaseBdev3_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 [2024-12-13 04:30:20.230198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:20.279 [2024-12-13 04:30:20.230251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.279 [2024-12-13 04:30:20.230278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:20.279 [2024-12-13 
04:30:20.230287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.279 [2024-12-13 04:30:20.232692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.279 [2024-12-13 04:30:20.232724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:20.279 BaseBdev3 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 spare_malloc 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 spare_delay 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 [2024-12-13 04:30:20.285835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.279 [2024-12-13 04:30:20.285885] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.279 [2024-12-13 04:30:20.285909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:20.279 [2024-12-13 04:30:20.285918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.279 [2024-12-13 04:30:20.288374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.279 [2024-12-13 04:30:20.288407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.279 spare 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 [2024-12-13 04:30:20.297891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.539 [2024-12-13 04:30:20.300007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.539 [2024-12-13 04:30:20.300065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.539 [2024-12-13 04:30:20.300224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:20.539 [2024-12-13 04:30:20.300238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:20.539 [2024-12-13 04:30:20.300512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:20.539 [2024-12-13 04:30:20.300953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:20.539 [2024-12-13 04:30:20.300976] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:20.539 [2024-12-13 04:30:20.301098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.539 "name": "raid_bdev1", 00:14:20.539 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:20.539 "strip_size_kb": 64, 00:14:20.539 "state": "online", 00:14:20.539 "raid_level": "raid5f", 00:14:20.539 "superblock": true, 00:14:20.539 "num_base_bdevs": 3, 00:14:20.539 "num_base_bdevs_discovered": 3, 00:14:20.539 "num_base_bdevs_operational": 3, 00:14:20.539 "base_bdevs_list": [ 00:14:20.539 { 00:14:20.539 "name": "BaseBdev1", 00:14:20.539 "uuid": "6c8c48c4-0489-5a62-ad39-78e738985ebd", 00:14:20.539 "is_configured": true, 00:14:20.539 "data_offset": 2048, 00:14:20.539 "data_size": 63488 00:14:20.539 }, 00:14:20.539 { 00:14:20.539 "name": "BaseBdev2", 00:14:20.539 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:20.539 "is_configured": true, 00:14:20.539 "data_offset": 2048, 00:14:20.539 "data_size": 63488 00:14:20.539 }, 00:14:20.539 { 00:14:20.539 "name": "BaseBdev3", 00:14:20.539 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:20.539 "is_configured": true, 00:14:20.539 "data_offset": 2048, 00:14:20.539 "data_size": 63488 00:14:20.539 } 00:14:20.539 ] 00:14:20.539 }' 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.539 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.799 [2024-12-13 04:30:20.762755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.799 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.059 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:14:21.060 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.060 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.060 04:30:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:21.060 [2024-12-13 04:30:21.038103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:21.060 /dev/nbd0 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.319 1+0 records in 00:14:21.319 1+0 records out 00:14:21.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578464 s, 7.1 MB/s 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:21.319 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.320 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.320 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:21.320 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:21.320 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:21.320 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:21.579 496+0 records in 00:14:21.579 496+0 records out 00:14:21.579 65011712 bytes (65 MB, 62 MiB) copied, 0.330541 s, 197 MB/s 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:21.579 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.839 [2024-12-13 04:30:21.665712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.839 [2024-12-13 04:30:21.681790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.839 "name": "raid_bdev1", 00:14:21.839 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:21.839 "strip_size_kb": 64, 00:14:21.839 "state": "online", 00:14:21.839 "raid_level": "raid5f", 00:14:21.839 "superblock": true, 00:14:21.839 "num_base_bdevs": 3, 00:14:21.839 "num_base_bdevs_discovered": 2, 00:14:21.839 "num_base_bdevs_operational": 2, 00:14:21.839 "base_bdevs_list": [ 00:14:21.839 { 00:14:21.839 "name": null, 00:14:21.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.839 "is_configured": 
false, 00:14:21.839 "data_offset": 0, 00:14:21.839 "data_size": 63488 00:14:21.839 }, 00:14:21.839 { 00:14:21.839 "name": "BaseBdev2", 00:14:21.839 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:21.839 "is_configured": true, 00:14:21.839 "data_offset": 2048, 00:14:21.839 "data_size": 63488 00:14:21.839 }, 00:14:21.839 { 00:14:21.839 "name": "BaseBdev3", 00:14:21.839 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:21.839 "is_configured": true, 00:14:21.839 "data_offset": 2048, 00:14:21.839 "data_size": 63488 00:14:21.839 } 00:14:21.839 ] 00:14:21.839 }' 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.839 04:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.409 04:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.409 04:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.409 04:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.409 [2024-12-13 04:30:22.192918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.409 [2024-12-13 04:30:22.200939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:14:22.409 04:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.409 04:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:22.410 [2024-12-13 04:30:22.203526] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.396 04:30:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.396 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.397 "name": "raid_bdev1", 00:14:23.397 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:23.397 "strip_size_kb": 64, 00:14:23.397 "state": "online", 00:14:23.397 "raid_level": "raid5f", 00:14:23.397 "superblock": true, 00:14:23.397 "num_base_bdevs": 3, 00:14:23.397 "num_base_bdevs_discovered": 3, 00:14:23.397 "num_base_bdevs_operational": 3, 00:14:23.397 "process": { 00:14:23.397 "type": "rebuild", 00:14:23.397 "target": "spare", 00:14:23.397 "progress": { 00:14:23.397 "blocks": 20480, 00:14:23.397 "percent": 16 00:14:23.397 } 00:14:23.397 }, 00:14:23.397 "base_bdevs_list": [ 00:14:23.397 { 00:14:23.397 "name": "spare", 00:14:23.397 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:23.397 "is_configured": true, 00:14:23.397 "data_offset": 2048, 00:14:23.397 "data_size": 63488 00:14:23.397 }, 00:14:23.397 { 00:14:23.397 "name": "BaseBdev2", 00:14:23.397 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:23.397 "is_configured": true, 00:14:23.397 "data_offset": 2048, 00:14:23.397 "data_size": 63488 
00:14:23.397 }, 00:14:23.397 { 00:14:23.397 "name": "BaseBdev3", 00:14:23.397 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:23.397 "is_configured": true, 00:14:23.397 "data_offset": 2048, 00:14:23.397 "data_size": 63488 00:14:23.397 } 00:14:23.397 ] 00:14:23.397 }' 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.397 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.397 [2024-12-13 04:30:23.366898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.673 [2024-12-13 04:30:23.412488] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.673 [2024-12-13 04:30:23.412901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.673 [2024-12-13 04:30:23.412923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.673 [2024-12-13 04:30:23.412945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.673 "name": "raid_bdev1", 00:14:23.673 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:23.673 "strip_size_kb": 64, 00:14:23.673 "state": "online", 00:14:23.673 "raid_level": "raid5f", 00:14:23.673 "superblock": true, 00:14:23.673 "num_base_bdevs": 3, 00:14:23.673 "num_base_bdevs_discovered": 2, 00:14:23.673 "num_base_bdevs_operational": 2, 00:14:23.673 "base_bdevs_list": [ 00:14:23.673 
{ 00:14:23.673 "name": null, 00:14:23.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.673 "is_configured": false, 00:14:23.673 "data_offset": 0, 00:14:23.673 "data_size": 63488 00:14:23.673 }, 00:14:23.673 { 00:14:23.673 "name": "BaseBdev2", 00:14:23.673 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:23.673 "is_configured": true, 00:14:23.673 "data_offset": 2048, 00:14:23.673 "data_size": 63488 00:14:23.673 }, 00:14:23.673 { 00:14:23.673 "name": "BaseBdev3", 00:14:23.673 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:23.673 "is_configured": true, 00:14:23.673 "data_offset": 2048, 00:14:23.673 "data_size": 63488 00:14:23.673 } 00:14:23.673 ] 00:14:23.673 }' 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.673 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.933 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:24.193 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.193 "name": "raid_bdev1", 00:14:24.193 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:24.193 "strip_size_kb": 64, 00:14:24.193 "state": "online", 00:14:24.193 "raid_level": "raid5f", 00:14:24.193 "superblock": true, 00:14:24.193 "num_base_bdevs": 3, 00:14:24.193 "num_base_bdevs_discovered": 2, 00:14:24.193 "num_base_bdevs_operational": 2, 00:14:24.193 "base_bdevs_list": [ 00:14:24.193 { 00:14:24.193 "name": null, 00:14:24.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.193 "is_configured": false, 00:14:24.193 "data_offset": 0, 00:14:24.193 "data_size": 63488 00:14:24.193 }, 00:14:24.193 { 00:14:24.193 "name": "BaseBdev2", 00:14:24.193 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:24.193 "is_configured": true, 00:14:24.193 "data_offset": 2048, 00:14:24.193 "data_size": 63488 00:14:24.193 }, 00:14:24.193 { 00:14:24.193 "name": "BaseBdev3", 00:14:24.193 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:24.193 "is_configured": true, 00:14:24.193 "data_offset": 2048, 00:14:24.193 "data_size": 63488 00:14:24.193 } 00:14:24.193 ] 00:14:24.193 }' 00:14:24.193 04:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:24.193 [2024-12-13 04:30:24.061429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.193 [2024-12-13 04:30:24.066436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.193 04:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:24.193 [2024-12-13 04:30:24.068965] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.133 "name": "raid_bdev1", 00:14:25.133 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:25.133 "strip_size_kb": 64, 00:14:25.133 "state": "online", 
00:14:25.133 "raid_level": "raid5f", 00:14:25.133 "superblock": true, 00:14:25.133 "num_base_bdevs": 3, 00:14:25.133 "num_base_bdevs_discovered": 3, 00:14:25.133 "num_base_bdevs_operational": 3, 00:14:25.133 "process": { 00:14:25.133 "type": "rebuild", 00:14:25.133 "target": "spare", 00:14:25.133 "progress": { 00:14:25.133 "blocks": 20480, 00:14:25.133 "percent": 16 00:14:25.133 } 00:14:25.133 }, 00:14:25.133 "base_bdevs_list": [ 00:14:25.133 { 00:14:25.133 "name": "spare", 00:14:25.133 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:25.133 "is_configured": true, 00:14:25.133 "data_offset": 2048, 00:14:25.133 "data_size": 63488 00:14:25.133 }, 00:14:25.133 { 00:14:25.133 "name": "BaseBdev2", 00:14:25.133 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:25.133 "is_configured": true, 00:14:25.133 "data_offset": 2048, 00:14:25.133 "data_size": 63488 00:14:25.133 }, 00:14:25.133 { 00:14:25.133 "name": "BaseBdev3", 00:14:25.133 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:25.133 "is_configured": true, 00:14:25.133 "data_offset": 2048, 00:14:25.133 "data_size": 63488 00:14:25.133 } 00:14:25.133 ] 00:14:25.133 }' 00:14:25.133 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:25.392 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.392 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.392 "name": "raid_bdev1", 00:14:25.392 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:25.392 "strip_size_kb": 64, 00:14:25.392 "state": "online", 00:14:25.392 "raid_level": "raid5f", 00:14:25.392 "superblock": true, 00:14:25.392 "num_base_bdevs": 3, 00:14:25.392 "num_base_bdevs_discovered": 3, 00:14:25.393 "num_base_bdevs_operational": 3, 00:14:25.393 "process": { 00:14:25.393 "type": 
"rebuild", 00:14:25.393 "target": "spare", 00:14:25.393 "progress": { 00:14:25.393 "blocks": 22528, 00:14:25.393 "percent": 17 00:14:25.393 } 00:14:25.393 }, 00:14:25.393 "base_bdevs_list": [ 00:14:25.393 { 00:14:25.393 "name": "spare", 00:14:25.393 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:25.393 "is_configured": true, 00:14:25.393 "data_offset": 2048, 00:14:25.393 "data_size": 63488 00:14:25.393 }, 00:14:25.393 { 00:14:25.393 "name": "BaseBdev2", 00:14:25.393 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:25.393 "is_configured": true, 00:14:25.393 "data_offset": 2048, 00:14:25.393 "data_size": 63488 00:14:25.393 }, 00:14:25.393 { 00:14:25.393 "name": "BaseBdev3", 00:14:25.393 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:25.393 "is_configured": true, 00:14:25.393 "data_offset": 2048, 00:14:25.393 "data_size": 63488 00:14:25.393 } 00:14:25.393 ] 00:14:25.393 }' 00:14:25.393 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.393 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.393 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.393 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.393 04:30:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.774 "name": "raid_bdev1", 00:14:26.774 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:26.774 "strip_size_kb": 64, 00:14:26.774 "state": "online", 00:14:26.774 "raid_level": "raid5f", 00:14:26.774 "superblock": true, 00:14:26.774 "num_base_bdevs": 3, 00:14:26.774 "num_base_bdevs_discovered": 3, 00:14:26.774 "num_base_bdevs_operational": 3, 00:14:26.774 "process": { 00:14:26.774 "type": "rebuild", 00:14:26.774 "target": "spare", 00:14:26.774 "progress": { 00:14:26.774 "blocks": 47104, 00:14:26.774 "percent": 37 00:14:26.774 } 00:14:26.774 }, 00:14:26.774 "base_bdevs_list": [ 00:14:26.774 { 00:14:26.774 "name": "spare", 00:14:26.774 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:26.774 "is_configured": true, 00:14:26.774 "data_offset": 2048, 00:14:26.774 "data_size": 63488 00:14:26.774 }, 00:14:26.774 { 00:14:26.774 "name": "BaseBdev2", 00:14:26.774 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:26.774 "is_configured": true, 00:14:26.774 "data_offset": 2048, 00:14:26.774 "data_size": 63488 00:14:26.774 }, 00:14:26.774 { 00:14:26.774 "name": "BaseBdev3", 00:14:26.774 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:26.774 
"is_configured": true, 00:14:26.774 "data_offset": 2048, 00:14:26.774 "data_size": 63488 00:14:26.774 } 00:14:26.774 ] 00:14:26.774 }' 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.774 04:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.714 "name": "raid_bdev1", 00:14:27.714 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:27.714 "strip_size_kb": 64, 00:14:27.714 "state": "online", 00:14:27.714 "raid_level": "raid5f", 00:14:27.714 "superblock": true, 00:14:27.714 "num_base_bdevs": 3, 00:14:27.714 "num_base_bdevs_discovered": 3, 00:14:27.714 "num_base_bdevs_operational": 3, 00:14:27.714 "process": { 00:14:27.714 "type": "rebuild", 00:14:27.714 "target": "spare", 00:14:27.714 "progress": { 00:14:27.714 "blocks": 69632, 00:14:27.714 "percent": 54 00:14:27.714 } 00:14:27.714 }, 00:14:27.714 "base_bdevs_list": [ 00:14:27.714 { 00:14:27.714 "name": "spare", 00:14:27.714 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:27.714 "is_configured": true, 00:14:27.714 "data_offset": 2048, 00:14:27.714 "data_size": 63488 00:14:27.714 }, 00:14:27.714 { 00:14:27.714 "name": "BaseBdev2", 00:14:27.714 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:27.714 "is_configured": true, 00:14:27.714 "data_offset": 2048, 00:14:27.714 "data_size": 63488 00:14:27.714 }, 00:14:27.714 { 00:14:27.714 "name": "BaseBdev3", 00:14:27.714 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:27.714 "is_configured": true, 00:14:27.714 "data_offset": 2048, 00:14:27.714 "data_size": 63488 00:14:27.714 } 00:14:27.714 ] 00:14:27.714 }' 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.714 04:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.096 "name": "raid_bdev1", 00:14:29.096 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:29.096 "strip_size_kb": 64, 00:14:29.096 "state": "online", 00:14:29.096 "raid_level": "raid5f", 00:14:29.096 "superblock": true, 00:14:29.096 "num_base_bdevs": 3, 00:14:29.096 "num_base_bdevs_discovered": 3, 00:14:29.096 "num_base_bdevs_operational": 3, 00:14:29.096 "process": { 00:14:29.096 "type": "rebuild", 00:14:29.096 "target": "spare", 00:14:29.096 "progress": { 00:14:29.096 "blocks": 94208, 00:14:29.096 "percent": 74 00:14:29.096 } 00:14:29.096 }, 00:14:29.096 "base_bdevs_list": [ 00:14:29.096 { 00:14:29.096 "name": "spare", 00:14:29.096 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:29.096 "is_configured": true, 
00:14:29.096 "data_offset": 2048, 00:14:29.096 "data_size": 63488 00:14:29.096 }, 00:14:29.096 { 00:14:29.096 "name": "BaseBdev2", 00:14:29.096 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:29.096 "is_configured": true, 00:14:29.096 "data_offset": 2048, 00:14:29.096 "data_size": 63488 00:14:29.096 }, 00:14:29.096 { 00:14:29.096 "name": "BaseBdev3", 00:14:29.096 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:29.096 "is_configured": true, 00:14:29.096 "data_offset": 2048, 00:14:29.096 "data_size": 63488 00:14:29.096 } 00:14:29.096 ] 00:14:29.096 }' 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.096 04:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.036 "name": "raid_bdev1", 00:14:30.036 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:30.036 "strip_size_kb": 64, 00:14:30.036 "state": "online", 00:14:30.036 "raid_level": "raid5f", 00:14:30.036 "superblock": true, 00:14:30.036 "num_base_bdevs": 3, 00:14:30.036 "num_base_bdevs_discovered": 3, 00:14:30.036 "num_base_bdevs_operational": 3, 00:14:30.036 "process": { 00:14:30.036 "type": "rebuild", 00:14:30.036 "target": "spare", 00:14:30.036 "progress": { 00:14:30.036 "blocks": 116736, 00:14:30.036 "percent": 91 00:14:30.036 } 00:14:30.036 }, 00:14:30.036 "base_bdevs_list": [ 00:14:30.036 { 00:14:30.036 "name": "spare", 00:14:30.036 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:30.036 "is_configured": true, 00:14:30.036 "data_offset": 2048, 00:14:30.036 "data_size": 63488 00:14:30.036 }, 00:14:30.036 { 00:14:30.036 "name": "BaseBdev2", 00:14:30.036 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:30.036 "is_configured": true, 00:14:30.036 "data_offset": 2048, 00:14:30.036 "data_size": 63488 00:14:30.036 }, 00:14:30.036 { 00:14:30.036 "name": "BaseBdev3", 00:14:30.036 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:30.036 "is_configured": true, 00:14:30.036 "data_offset": 2048, 00:14:30.036 "data_size": 63488 00:14:30.036 } 00:14:30.036 ] 00:14:30.036 }' 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:30.036 04:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.036 04:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.036 04:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.296 [2024-12-13 04:30:30.308013] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.296 [2024-12-13 04:30:30.308134] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.296 [2024-12-13 04:30:30.308780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.236 04:30:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.236 "name": "raid_bdev1", 00:14:31.236 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:31.236 "strip_size_kb": 64, 00:14:31.236 "state": "online", 00:14:31.236 "raid_level": "raid5f", 00:14:31.236 "superblock": true, 00:14:31.236 "num_base_bdevs": 3, 00:14:31.236 "num_base_bdevs_discovered": 3, 00:14:31.236 "num_base_bdevs_operational": 3, 00:14:31.236 "base_bdevs_list": [ 00:14:31.236 { 00:14:31.236 "name": "spare", 00:14:31.236 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:31.236 "is_configured": true, 00:14:31.236 "data_offset": 2048, 00:14:31.236 "data_size": 63488 00:14:31.236 }, 00:14:31.236 { 00:14:31.236 "name": "BaseBdev2", 00:14:31.236 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:31.236 "is_configured": true, 00:14:31.236 "data_offset": 2048, 00:14:31.236 "data_size": 63488 00:14:31.236 }, 00:14:31.236 { 00:14:31.236 "name": "BaseBdev3", 00:14:31.236 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:31.236 "is_configured": true, 00:14:31.236 "data_offset": 2048, 00:14:31.236 "data_size": 63488 00:14:31.236 } 00:14:31.236 ] 00:14:31.236 }' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.236 
04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.236 "name": "raid_bdev1", 00:14:31.236 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:31.236 "strip_size_kb": 64, 00:14:31.236 "state": "online", 00:14:31.236 "raid_level": "raid5f", 00:14:31.236 "superblock": true, 00:14:31.236 "num_base_bdevs": 3, 00:14:31.236 "num_base_bdevs_discovered": 3, 00:14:31.236 "num_base_bdevs_operational": 3, 00:14:31.236 "base_bdevs_list": [ 00:14:31.236 { 00:14:31.236 "name": "spare", 00:14:31.236 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:31.236 "is_configured": true, 00:14:31.236 "data_offset": 2048, 00:14:31.236 "data_size": 63488 00:14:31.236 }, 00:14:31.236 { 00:14:31.236 "name": "BaseBdev2", 00:14:31.236 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:31.236 "is_configured": true, 00:14:31.236 "data_offset": 2048, 00:14:31.236 "data_size": 63488 00:14:31.236 }, 00:14:31.236 { 00:14:31.236 "name": "BaseBdev3", 00:14:31.236 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:31.236 "is_configured": true, 00:14:31.236 "data_offset": 2048, 
00:14:31.236 "data_size": 63488 00:14:31.236 } 00:14:31.236 ] 00:14:31.236 }' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.236 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.496 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.497 "name": "raid_bdev1", 00:14:31.497 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:31.497 "strip_size_kb": 64, 00:14:31.497 "state": "online", 00:14:31.497 "raid_level": "raid5f", 00:14:31.497 "superblock": true, 00:14:31.497 "num_base_bdevs": 3, 00:14:31.497 "num_base_bdevs_discovered": 3, 00:14:31.497 "num_base_bdevs_operational": 3, 00:14:31.497 "base_bdevs_list": [ 00:14:31.497 { 00:14:31.497 "name": "spare", 00:14:31.497 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:31.497 "is_configured": true, 00:14:31.497 "data_offset": 2048, 00:14:31.497 "data_size": 63488 00:14:31.497 }, 00:14:31.497 { 00:14:31.497 "name": "BaseBdev2", 00:14:31.497 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:31.497 "is_configured": true, 00:14:31.497 "data_offset": 2048, 00:14:31.497 "data_size": 63488 00:14:31.497 }, 00:14:31.497 { 00:14:31.497 "name": "BaseBdev3", 00:14:31.497 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:31.497 "is_configured": true, 00:14:31.497 "data_offset": 2048, 00:14:31.497 "data_size": 63488 00:14:31.497 } 00:14:31.497 ] 00:14:31.497 }' 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.497 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.757 [2024-12-13 04:30:31.712562] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.757 [2024-12-13 04:30:31.712643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.757 [2024-12-13 04:30:31.712739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.757 [2024-12-13 04:30:31.712851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.757 [2024-12-13 04:30:31.712904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:31.757 04:30:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.757 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:32.017 /dev/nbd0 00:14:32.017 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:32.017 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:32.017 04:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.017 1+0 records in 00:14:32.017 1+0 records out 00:14:32.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443461 s, 9.2 MB/s 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.017 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:32.277 /dev/nbd1 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:32.277 
04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.277 1+0 records in 00:14:32.277 1+0 records out 00:14:32.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404821 s, 10.1 MB/s 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.277 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.537 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.797 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.798 [2024-12-13 04:30:32.805603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.798 [2024-12-13 04:30:32.805666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.798 [2024-12-13 04:30:32.805704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:32.798 [2024-12-13 04:30:32.805714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.798 [2024-12-13 04:30:32.808083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.798 [2024-12-13 04:30:32.808120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.798 [2024-12-13 04:30:32.808208] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:32.798 [2024-12-13 04:30:32.808252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.798 [2024-12-13 04:30:32.808384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.798 [2024-12-13 04:30:32.808520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.798 spare 00:14:32.798 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.798 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:32.798 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.798 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.058 [2024-12-13 04:30:32.908430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:33.058 [2024-12-13 04:30:32.908491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.058 [2024-12-13 04:30:32.908752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:14:33.058 [2024-12-13 04:30:32.909179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:33.058 [2024-12-13 04:30:32.909195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:33.058 [2024-12-13 04:30:32.909329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.058 "name": "raid_bdev1", 00:14:33.058 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:33.058 "strip_size_kb": 64, 00:14:33.058 "state": "online", 00:14:33.058 "raid_level": "raid5f", 00:14:33.058 "superblock": true, 00:14:33.058 "num_base_bdevs": 3, 00:14:33.058 "num_base_bdevs_discovered": 3, 00:14:33.058 "num_base_bdevs_operational": 3, 00:14:33.058 "base_bdevs_list": [ 00:14:33.058 { 
00:14:33.058 "name": "spare", 00:14:33.058 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:33.058 "is_configured": true, 00:14:33.058 "data_offset": 2048, 00:14:33.058 "data_size": 63488 00:14:33.058 }, 00:14:33.058 { 00:14:33.058 "name": "BaseBdev2", 00:14:33.058 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:33.058 "is_configured": true, 00:14:33.058 "data_offset": 2048, 00:14:33.058 "data_size": 63488 00:14:33.058 }, 00:14:33.058 { 00:14:33.058 "name": "BaseBdev3", 00:14:33.058 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:33.058 "is_configured": true, 00:14:33.058 "data_offset": 2048, 00:14:33.058 "data_size": 63488 00:14:33.058 } 00:14:33.058 ] 00:14:33.058 }' 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.058 04:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.628 "name": "raid_bdev1", 00:14:33.628 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:33.628 "strip_size_kb": 64, 00:14:33.628 "state": "online", 00:14:33.628 "raid_level": "raid5f", 00:14:33.628 "superblock": true, 00:14:33.628 "num_base_bdevs": 3, 00:14:33.628 "num_base_bdevs_discovered": 3, 00:14:33.628 "num_base_bdevs_operational": 3, 00:14:33.628 "base_bdevs_list": [ 00:14:33.628 { 00:14:33.628 "name": "spare", 00:14:33.628 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 }, 00:14:33.628 { 00:14:33.628 "name": "BaseBdev2", 00:14:33.628 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 }, 00:14:33.628 { 00:14:33.628 "name": "BaseBdev3", 00:14:33.628 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:33.628 "is_configured": true, 00:14:33.628 "data_offset": 2048, 00:14:33.628 "data_size": 63488 00:14:33.628 } 00:14:33.628 ] 00:14:33.628 }' 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.628 [2024-12-13 04:30:33.592538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.628 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.628 "name": "raid_bdev1", 00:14:33.628 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:33.628 "strip_size_kb": 64, 00:14:33.628 "state": "online", 00:14:33.628 "raid_level": "raid5f", 00:14:33.628 "superblock": true, 00:14:33.628 "num_base_bdevs": 3, 00:14:33.628 "num_base_bdevs_discovered": 2, 00:14:33.628 "num_base_bdevs_operational": 2, 00:14:33.628 "base_bdevs_list": [ 00:14:33.628 { 00:14:33.628 "name": null, 00:14:33.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.628 "is_configured": false, 00:14:33.628 "data_offset": 0, 00:14:33.628 "data_size": 63488 00:14:33.628 }, 00:14:33.628 { 00:14:33.628 "name": "BaseBdev2", 00:14:33.628 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:33.628 "is_configured": true, 00:14:33.629 "data_offset": 2048, 00:14:33.629 "data_size": 63488 00:14:33.629 }, 00:14:33.629 { 00:14:33.629 "name": "BaseBdev3", 00:14:33.629 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:33.629 "is_configured": true, 00:14:33.629 "data_offset": 2048, 00:14:33.629 "data_size": 63488 00:14:33.629 } 00:14:33.629 ] 00:14:33.629 }' 00:14:33.629 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.629 04:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:34.198 04:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.198 04:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.198 04:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.198 [2024-12-13 04:30:34.063972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.198 [2024-12-13 04:30:34.064162] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:34.198 [2024-12-13 04:30:34.064245] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:34.198 [2024-12-13 04:30:34.064324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.198 [2024-12-13 04:30:34.072112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:14:34.198 04:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.198 04:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:34.198 [2024-12-13 04:30:34.074612] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.138 "name": "raid_bdev1", 00:14:35.138 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:35.138 "strip_size_kb": 64, 00:14:35.138 "state": "online", 00:14:35.138 "raid_level": "raid5f", 00:14:35.138 "superblock": true, 00:14:35.138 "num_base_bdevs": 3, 00:14:35.138 "num_base_bdevs_discovered": 3, 00:14:35.138 "num_base_bdevs_operational": 3, 00:14:35.138 "process": { 00:14:35.138 "type": "rebuild", 00:14:35.138 "target": "spare", 00:14:35.138 "progress": { 00:14:35.138 "blocks": 20480, 00:14:35.138 "percent": 16 00:14:35.138 } 00:14:35.138 }, 00:14:35.138 "base_bdevs_list": [ 00:14:35.138 { 00:14:35.138 "name": "spare", 00:14:35.138 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:35.138 "is_configured": true, 00:14:35.138 "data_offset": 2048, 00:14:35.138 "data_size": 63488 00:14:35.138 }, 00:14:35.138 { 00:14:35.138 "name": "BaseBdev2", 00:14:35.138 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:35.138 "is_configured": true, 00:14:35.138 "data_offset": 2048, 00:14:35.138 "data_size": 63488 00:14:35.138 }, 00:14:35.138 { 00:14:35.138 "name": "BaseBdev3", 00:14:35.138 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:35.138 "is_configured": true, 00:14:35.138 "data_offset": 2048, 00:14:35.138 "data_size": 63488 00:14:35.138 } 00:14:35.138 ] 00:14:35.138 }' 00:14:35.138 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.398 [2024-12-13 04:30:35.213963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.398 [2024-12-13 04:30:35.282620] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.398 [2024-12-13 04:30:35.282668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.398 [2024-12-13 04:30:35.282687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.398 [2024-12-13 04:30:35.282695] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.398 "name": "raid_bdev1", 00:14:35.398 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:35.398 "strip_size_kb": 64, 00:14:35.398 "state": "online", 00:14:35.398 "raid_level": "raid5f", 00:14:35.398 "superblock": true, 00:14:35.398 "num_base_bdevs": 3, 00:14:35.398 "num_base_bdevs_discovered": 2, 00:14:35.398 "num_base_bdevs_operational": 2, 00:14:35.398 "base_bdevs_list": [ 00:14:35.398 { 00:14:35.398 "name": null, 00:14:35.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.398 "is_configured": false, 00:14:35.398 "data_offset": 0, 00:14:35.398 "data_size": 63488 00:14:35.398 }, 00:14:35.398 { 00:14:35.398 "name": "BaseBdev2", 00:14:35.398 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:35.398 "is_configured": true, 00:14:35.398 
"data_offset": 2048, 00:14:35.398 "data_size": 63488 00:14:35.398 }, 00:14:35.398 { 00:14:35.398 "name": "BaseBdev3", 00:14:35.398 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:35.398 "is_configured": true, 00:14:35.398 "data_offset": 2048, 00:14:35.398 "data_size": 63488 00:14:35.398 } 00:14:35.398 ] 00:14:35.398 }' 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.398 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.968 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.968 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.968 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.968 [2024-12-13 04:30:35.738730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.968 [2024-12-13 04:30:35.738840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.968 [2024-12-13 04:30:35.738880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:35.968 [2024-12-13 04:30:35.738907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.968 [2024-12-13 04:30:35.739445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.968 [2024-12-13 04:30:35.739512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.968 [2024-12-13 04:30:35.739620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:35.968 [2024-12-13 04:30:35.739658] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:35.968 [2024-12-13 04:30:35.739725] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:35.968 [2024-12-13 04:30:35.739782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.968 [2024-12-13 04:30:35.744966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:14:35.968 spare 00:14:35.968 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.968 04:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:35.968 [2024-12-13 04:30:35.747410] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.908 "name": "raid_bdev1", 00:14:36.908 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 
00:14:36.908 "strip_size_kb": 64, 00:14:36.908 "state": "online", 00:14:36.908 "raid_level": "raid5f", 00:14:36.908 "superblock": true, 00:14:36.908 "num_base_bdevs": 3, 00:14:36.908 "num_base_bdevs_discovered": 3, 00:14:36.908 "num_base_bdevs_operational": 3, 00:14:36.908 "process": { 00:14:36.908 "type": "rebuild", 00:14:36.908 "target": "spare", 00:14:36.908 "progress": { 00:14:36.908 "blocks": 20480, 00:14:36.908 "percent": 16 00:14:36.908 } 00:14:36.908 }, 00:14:36.908 "base_bdevs_list": [ 00:14:36.908 { 00:14:36.908 "name": "spare", 00:14:36.908 "uuid": "c1b66362-8b05-5aea-9647-55d29e1062ca", 00:14:36.908 "is_configured": true, 00:14:36.908 "data_offset": 2048, 00:14:36.908 "data_size": 63488 00:14:36.908 }, 00:14:36.908 { 00:14:36.908 "name": "BaseBdev2", 00:14:36.908 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:36.908 "is_configured": true, 00:14:36.908 "data_offset": 2048, 00:14:36.908 "data_size": 63488 00:14:36.908 }, 00:14:36.908 { 00:14:36.908 "name": "BaseBdev3", 00:14:36.908 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:36.908 "is_configured": true, 00:14:36.908 "data_offset": 2048, 00:14:36.908 "data_size": 63488 00:14:36.908 } 00:14:36.908 ] 00:14:36.908 }' 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.908 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:36.908 [2024-12-13 04:30:36.911224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.168 [2024-12-13 04:30:36.955348] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.168 [2024-12-13 04:30:36.955401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.168 [2024-12-13 04:30:36.955416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.168 [2024-12-13 04:30:36.955429] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.168 
04:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.168 04:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.168 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.168 "name": "raid_bdev1", 00:14:37.168 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:37.168 "strip_size_kb": 64, 00:14:37.168 "state": "online", 00:14:37.168 "raid_level": "raid5f", 00:14:37.168 "superblock": true, 00:14:37.168 "num_base_bdevs": 3, 00:14:37.168 "num_base_bdevs_discovered": 2, 00:14:37.168 "num_base_bdevs_operational": 2, 00:14:37.168 "base_bdevs_list": [ 00:14:37.168 { 00:14:37.168 "name": null, 00:14:37.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.168 "is_configured": false, 00:14:37.168 "data_offset": 0, 00:14:37.168 "data_size": 63488 00:14:37.168 }, 00:14:37.168 { 00:14:37.168 "name": "BaseBdev2", 00:14:37.168 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:37.168 "is_configured": true, 00:14:37.168 "data_offset": 2048, 00:14:37.168 "data_size": 63488 00:14:37.168 }, 00:14:37.168 { 00:14:37.168 "name": "BaseBdev3", 00:14:37.168 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:37.168 "is_configured": true, 00:14:37.168 "data_offset": 2048, 00:14:37.168 "data_size": 63488 00:14:37.168 } 00:14:37.168 ] 00:14:37.168 }' 00:14:37.168 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.168 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.738 04:30:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.738 "name": "raid_bdev1", 00:14:37.738 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:37.738 "strip_size_kb": 64, 00:14:37.738 "state": "online", 00:14:37.738 "raid_level": "raid5f", 00:14:37.738 "superblock": true, 00:14:37.738 "num_base_bdevs": 3, 00:14:37.738 "num_base_bdevs_discovered": 2, 00:14:37.738 "num_base_bdevs_operational": 2, 00:14:37.738 "base_bdevs_list": [ 00:14:37.738 { 00:14:37.738 "name": null, 00:14:37.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.738 "is_configured": false, 00:14:37.738 "data_offset": 0, 00:14:37.738 "data_size": 63488 00:14:37.738 }, 00:14:37.738 { 00:14:37.738 "name": "BaseBdev2", 00:14:37.738 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:37.738 "is_configured": true, 00:14:37.738 "data_offset": 2048, 00:14:37.738 "data_size": 63488 00:14:37.738 }, 00:14:37.738 { 00:14:37.738 "name": "BaseBdev3", 00:14:37.738 "uuid": 
"47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:37.738 "is_configured": true, 00:14:37.738 "data_offset": 2048, 00:14:37.738 "data_size": 63488 00:14:37.738 } 00:14:37.738 ] 00:14:37.738 }' 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.738 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.739 [2024-12-13 04:30:37.622895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:37.739 [2024-12-13 04:30:37.622944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.739 [2024-12-13 04:30:37.622963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:37.739 [2024-12-13 04:30:37.622974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.739 [2024-12-13 04:30:37.623402] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.739 [2024-12-13 04:30:37.623422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.739 [2024-12-13 04:30:37.623501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:37.739 [2024-12-13 04:30:37.623527] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:37.739 [2024-12-13 04:30:37.623536] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.739 [2024-12-13 04:30:37.623548] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:37.739 BaseBdev1 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.739 04:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.678 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.679 "name": "raid_bdev1", 00:14:38.679 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:38.679 "strip_size_kb": 64, 00:14:38.679 "state": "online", 00:14:38.679 "raid_level": "raid5f", 00:14:38.679 "superblock": true, 00:14:38.679 "num_base_bdevs": 3, 00:14:38.679 "num_base_bdevs_discovered": 2, 00:14:38.679 "num_base_bdevs_operational": 2, 00:14:38.679 "base_bdevs_list": [ 00:14:38.679 { 00:14:38.679 "name": null, 00:14:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.679 "is_configured": false, 00:14:38.679 "data_offset": 0, 00:14:38.679 "data_size": 63488 00:14:38.679 }, 00:14:38.679 { 00:14:38.679 "name": "BaseBdev2", 00:14:38.679 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:38.679 "is_configured": true, 00:14:38.679 "data_offset": 2048, 00:14:38.679 "data_size": 63488 00:14:38.679 }, 00:14:38.679 { 00:14:38.679 "name": "BaseBdev3", 00:14:38.679 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:38.679 "is_configured": true, 00:14:38.679 "data_offset": 2048, 00:14:38.679 "data_size": 63488 00:14:38.679 } 00:14:38.679 ] 00:14:38.679 }' 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:38.679 04:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.249 "name": "raid_bdev1", 00:14:39.249 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:39.249 "strip_size_kb": 64, 00:14:39.249 "state": "online", 00:14:39.249 "raid_level": "raid5f", 00:14:39.249 "superblock": true, 00:14:39.249 "num_base_bdevs": 3, 00:14:39.249 "num_base_bdevs_discovered": 2, 00:14:39.249 "num_base_bdevs_operational": 2, 00:14:39.249 "base_bdevs_list": [ 00:14:39.249 { 00:14:39.249 "name": null, 00:14:39.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.249 "is_configured": false, 00:14:39.249 "data_offset": 0, 00:14:39.249 "data_size": 63488 00:14:39.249 }, 00:14:39.249 { 00:14:39.249 "name": 
"BaseBdev2", 00:14:39.249 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:39.249 "is_configured": true, 00:14:39.249 "data_offset": 2048, 00:14:39.249 "data_size": 63488 00:14:39.249 }, 00:14:39.249 { 00:14:39.249 "name": "BaseBdev3", 00:14:39.249 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:39.249 "is_configured": true, 00:14:39.249 "data_offset": 2048, 00:14:39.249 "data_size": 63488 00:14:39.249 } 00:14:39.249 ] 00:14:39.249 }' 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.249 [2024-12-13 04:30:39.244183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.249 [2024-12-13 04:30:39.244353] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:39.249 [2024-12-13 04:30:39.244372] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:39.249 request: 00:14:39.249 { 00:14:39.249 "base_bdev": "BaseBdev1", 00:14:39.249 "raid_bdev": "raid_bdev1", 00:14:39.249 "method": "bdev_raid_add_base_bdev", 00:14:39.249 "req_id": 1 00:14:39.249 } 00:14:39.249 Got JSON-RPC error response 00:14:39.249 response: 00:14:39.249 { 00:14:39.249 "code": -22, 00:14:39.249 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:39.249 } 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:39.249 04:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.629 "name": "raid_bdev1", 00:14:40.629 "uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:40.629 "strip_size_kb": 64, 00:14:40.629 "state": "online", 00:14:40.629 "raid_level": "raid5f", 00:14:40.629 "superblock": true, 00:14:40.629 "num_base_bdevs": 3, 00:14:40.629 "num_base_bdevs_discovered": 2, 00:14:40.629 "num_base_bdevs_operational": 2, 00:14:40.629 "base_bdevs_list": [ 00:14:40.629 { 00:14:40.629 "name": null, 00:14:40.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.629 "is_configured": false, 00:14:40.629 "data_offset": 0, 00:14:40.629 
"data_size": 63488 00:14:40.629 }, 00:14:40.629 { 00:14:40.629 "name": "BaseBdev2", 00:14:40.629 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:40.629 "is_configured": true, 00:14:40.629 "data_offset": 2048, 00:14:40.629 "data_size": 63488 00:14:40.629 }, 00:14:40.629 { 00:14:40.629 "name": "BaseBdev3", 00:14:40.629 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:40.629 "is_configured": true, 00:14:40.629 "data_offset": 2048, 00:14:40.629 "data_size": 63488 00:14:40.629 } 00:14:40.629 ] 00:14:40.629 }' 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.629 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.889 "name": "raid_bdev1", 00:14:40.889 
"uuid": "e5a80e35-ee58-4b4c-9697-86fb9e5cdcf7", 00:14:40.889 "strip_size_kb": 64, 00:14:40.889 "state": "online", 00:14:40.889 "raid_level": "raid5f", 00:14:40.889 "superblock": true, 00:14:40.889 "num_base_bdevs": 3, 00:14:40.889 "num_base_bdevs_discovered": 2, 00:14:40.889 "num_base_bdevs_operational": 2, 00:14:40.889 "base_bdevs_list": [ 00:14:40.889 { 00:14:40.889 "name": null, 00:14:40.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.889 "is_configured": false, 00:14:40.889 "data_offset": 0, 00:14:40.889 "data_size": 63488 00:14:40.889 }, 00:14:40.889 { 00:14:40.889 "name": "BaseBdev2", 00:14:40.889 "uuid": "e1ed51b5-cf1e-565f-93bd-5c2fb1d7a8c0", 00:14:40.889 "is_configured": true, 00:14:40.889 "data_offset": 2048, 00:14:40.889 "data_size": 63488 00:14:40.889 }, 00:14:40.889 { 00:14:40.889 "name": "BaseBdev3", 00:14:40.889 "uuid": "47ee8698-be8f-54a3-b8e0-6d311574919b", 00:14:40.889 "is_configured": true, 00:14:40.889 "data_offset": 2048, 00:14:40.889 "data_size": 63488 00:14:40.889 } 00:14:40.889 ] 00:14:40.889 }' 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94253 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94253 ']' 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 94253 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.889 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94253 00:14:41.149 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.149 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.149 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94253' 00:14:41.149 killing process with pid 94253 00:14:41.149 Received shutdown signal, test time was about 60.000000 seconds 00:14:41.149 00:14:41.149 Latency(us) 00:14:41.149 [2024-12-13T04:30:41.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.149 [2024-12-13T04:30:41.164Z] =================================================================================================================== 00:14:41.149 [2024-12-13T04:30:41.164Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.149 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 94253 00:14:41.149 [2024-12-13 04:30:40.925114] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.149 [2024-12-13 04:30:40.925222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.149 04:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 94253 00:14:41.149 [2024-12-13 04:30:40.925280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.149 [2024-12-13 04:30:40.925289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:41.149 [2024-12-13 04:30:41.001702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.409 04:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:14:41.409 00:14:41.409 real 0m22.095s 00:14:41.409 user 0m28.762s 00:14:41.409 sys 0m2.939s 00:14:41.409 ************************************ 00:14:41.409 END TEST raid5f_rebuild_test_sb 00:14:41.409 ************************************ 00:14:41.409 04:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.409 04:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.409 04:30:41 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:41.409 04:30:41 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:41.409 04:30:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:41.409 04:30:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.409 04:30:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.409 ************************************ 00:14:41.409 START TEST raid5f_state_function_test 00:14:41.409 ************************************ 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:41.409 04:30:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:41.409 
04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:41.409 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=94995 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94995' 00:14:41.410 Process raid pid: 94995 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 94995 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 94995 ']' 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.410 04:30:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.669 [2024-12-13 04:30:41.498573] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:41.669 [2024-12-13 04:30:41.498759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.669 [2024-12-13 04:30:41.656267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.927 [2024-12-13 04:30:41.695911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.927 [2024-12-13 04:30:41.773350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.927 [2024-12-13 04:30:41.773506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.497 [2024-12-13 04:30:42.328796] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.497 [2024-12-13 04:30:42.328926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.497 [2024-12-13 04:30:42.328960] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.497 [2024-12-13 04:30:42.328984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.497 [2024-12-13 04:30:42.329002] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:42.497 [2024-12-13 04:30:42.329028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.497 [2024-12-13 04:30:42.329045] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.497 [2024-12-13 04:30:42.329082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.497 04:30:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.497 "name": "Existed_Raid", 00:14:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.497 "strip_size_kb": 64, 00:14:42.497 "state": "configuring", 00:14:42.497 "raid_level": "raid5f", 00:14:42.497 "superblock": false, 00:14:42.497 "num_base_bdevs": 4, 00:14:42.497 "num_base_bdevs_discovered": 0, 00:14:42.497 "num_base_bdevs_operational": 4, 00:14:42.497 "base_bdevs_list": [ 00:14:42.497 { 00:14:42.497 "name": "BaseBdev1", 00:14:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.497 "is_configured": false, 00:14:42.497 "data_offset": 0, 00:14:42.497 "data_size": 0 00:14:42.497 }, 00:14:42.497 { 00:14:42.497 "name": "BaseBdev2", 00:14:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.497 "is_configured": false, 00:14:42.497 "data_offset": 0, 00:14:42.497 "data_size": 0 00:14:42.497 }, 00:14:42.497 { 00:14:42.497 "name": "BaseBdev3", 00:14:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.497 "is_configured": false, 00:14:42.497 "data_offset": 0, 00:14:42.497 "data_size": 0 00:14:42.497 }, 00:14:42.497 { 00:14:42.497 "name": "BaseBdev4", 00:14:42.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.497 "is_configured": false, 00:14:42.497 "data_offset": 0, 00:14:42.497 "data_size": 0 00:14:42.497 } 00:14:42.497 ] 00:14:42.497 }' 00:14:42.497 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.498 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 [2024-12-13 04:30:42.827755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.068 [2024-12-13 04:30:42.827848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 [2024-12-13 04:30:42.839767] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.068 [2024-12-13 04:30:42.839857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.068 [2024-12-13 04:30:42.839882] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.068 [2024-12-13 04:30:42.839905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.068 [2024-12-13 04:30:42.839921] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.068 [2024-12-13 04:30:42.839942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.068 [2024-12-13 04:30:42.839958] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:43.068 [2024-12-13 04:30:42.839978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 [2024-12-13 04:30:42.866797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.068 BaseBdev1 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.068 
04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 [ 00:14:43.068 { 00:14:43.068 "name": "BaseBdev1", 00:14:43.068 "aliases": [ 00:14:43.068 "0384a1bf-dd7f-46ea-a70d-b0c36fb82872" 00:14:43.068 ], 00:14:43.068 "product_name": "Malloc disk", 00:14:43.068 "block_size": 512, 00:14:43.068 "num_blocks": 65536, 00:14:43.068 "uuid": "0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:43.068 "assigned_rate_limits": { 00:14:43.068 "rw_ios_per_sec": 0, 00:14:43.068 "rw_mbytes_per_sec": 0, 00:14:43.068 "r_mbytes_per_sec": 0, 00:14:43.068 "w_mbytes_per_sec": 0 00:14:43.068 }, 00:14:43.068 "claimed": true, 00:14:43.068 "claim_type": "exclusive_write", 00:14:43.068 "zoned": false, 00:14:43.068 "supported_io_types": { 00:14:43.068 "read": true, 00:14:43.068 "write": true, 00:14:43.068 "unmap": true, 00:14:43.068 "flush": true, 00:14:43.068 "reset": true, 00:14:43.068 "nvme_admin": false, 00:14:43.068 "nvme_io": false, 00:14:43.068 "nvme_io_md": false, 00:14:43.068 "write_zeroes": true, 00:14:43.068 "zcopy": true, 00:14:43.068 "get_zone_info": false, 00:14:43.068 "zone_management": false, 00:14:43.068 "zone_append": false, 00:14:43.068 "compare": false, 00:14:43.068 "compare_and_write": false, 00:14:43.068 "abort": true, 00:14:43.068 "seek_hole": false, 00:14:43.068 "seek_data": false, 00:14:43.068 "copy": true, 00:14:43.068 "nvme_iov_md": false 00:14:43.068 }, 00:14:43.068 "memory_domains": [ 00:14:43.068 { 00:14:43.068 "dma_device_id": "system", 00:14:43.068 "dma_device_type": 1 00:14:43.068 }, 00:14:43.068 { 00:14:43.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.068 "dma_device_type": 2 00:14:43.068 } 00:14:43.068 ], 00:14:43.068 "driver_specific": {} 00:14:43.068 } 
00:14:43.068 ] 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.068 "name": "Existed_Raid", 00:14:43.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.068 "strip_size_kb": 64, 00:14:43.068 "state": "configuring", 00:14:43.068 "raid_level": "raid5f", 00:14:43.068 "superblock": false, 00:14:43.068 "num_base_bdevs": 4, 00:14:43.068 "num_base_bdevs_discovered": 1, 00:14:43.068 "num_base_bdevs_operational": 4, 00:14:43.068 "base_bdevs_list": [ 00:14:43.068 { 00:14:43.068 "name": "BaseBdev1", 00:14:43.068 "uuid": "0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:43.068 "is_configured": true, 00:14:43.068 "data_offset": 0, 00:14:43.068 "data_size": 65536 00:14:43.068 }, 00:14:43.068 { 00:14:43.068 "name": "BaseBdev2", 00:14:43.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.068 "is_configured": false, 00:14:43.068 "data_offset": 0, 00:14:43.068 "data_size": 0 00:14:43.068 }, 00:14:43.068 { 00:14:43.068 "name": "BaseBdev3", 00:14:43.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.068 "is_configured": false, 00:14:43.068 "data_offset": 0, 00:14:43.068 "data_size": 0 00:14:43.068 }, 00:14:43.068 { 00:14:43.068 "name": "BaseBdev4", 00:14:43.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.068 "is_configured": false, 00:14:43.068 "data_offset": 0, 00:14:43.068 "data_size": 0 00:14:43.068 } 00:14:43.068 ] 00:14:43.068 }' 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.068 04:30:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.328 
[2024-12-13 04:30:43.326095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.328 [2024-12-13 04:30:43.326175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.328 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.328 [2024-12-13 04:30:43.338110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.328 [2024-12-13 04:30:43.340237] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.328 [2024-12-13 04:30:43.340310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.329 [2024-12-13 04:30:43.340337] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.329 [2024-12-13 04:30:43.340357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.329 [2024-12-13 04:30:43.340374] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:43.329 [2024-12-13 04:30:43.340392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:43.329 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.329 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:43.329 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.588 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.589 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.589 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.589 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.589 "name": "Existed_Raid", 00:14:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:43.589 "strip_size_kb": 64, 00:14:43.589 "state": "configuring", 00:14:43.589 "raid_level": "raid5f", 00:14:43.589 "superblock": false, 00:14:43.589 "num_base_bdevs": 4, 00:14:43.589 "num_base_bdevs_discovered": 1, 00:14:43.589 "num_base_bdevs_operational": 4, 00:14:43.589 "base_bdevs_list": [ 00:14:43.589 { 00:14:43.589 "name": "BaseBdev1", 00:14:43.589 "uuid": "0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:43.589 "is_configured": true, 00:14:43.589 "data_offset": 0, 00:14:43.589 "data_size": 65536 00:14:43.589 }, 00:14:43.589 { 00:14:43.589 "name": "BaseBdev2", 00:14:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.589 "is_configured": false, 00:14:43.589 "data_offset": 0, 00:14:43.589 "data_size": 0 00:14:43.589 }, 00:14:43.589 { 00:14:43.589 "name": "BaseBdev3", 00:14:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.589 "is_configured": false, 00:14:43.589 "data_offset": 0, 00:14:43.589 "data_size": 0 00:14:43.589 }, 00:14:43.589 { 00:14:43.589 "name": "BaseBdev4", 00:14:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.589 "is_configured": false, 00:14:43.589 "data_offset": 0, 00:14:43.589 "data_size": 0 00:14:43.589 } 00:14:43.589 ] 00:14:43.589 }' 00:14:43.589 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.589 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.848 [2024-12-13 04:30:43.794161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.848 BaseBdev2 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.848 [ 00:14:43.848 { 00:14:43.848 "name": "BaseBdev2", 00:14:43.848 "aliases": [ 00:14:43.848 "03a41592-e129-42f5-9856-beb29e8608ce" 00:14:43.848 ], 00:14:43.848 "product_name": "Malloc disk", 00:14:43.848 "block_size": 512, 00:14:43.848 "num_blocks": 65536, 00:14:43.848 "uuid": "03a41592-e129-42f5-9856-beb29e8608ce", 00:14:43.848 "assigned_rate_limits": { 00:14:43.848 "rw_ios_per_sec": 0, 00:14:43.848 "rw_mbytes_per_sec": 0, 00:14:43.848 
"r_mbytes_per_sec": 0, 00:14:43.848 "w_mbytes_per_sec": 0 00:14:43.848 }, 00:14:43.848 "claimed": true, 00:14:43.848 "claim_type": "exclusive_write", 00:14:43.848 "zoned": false, 00:14:43.848 "supported_io_types": { 00:14:43.848 "read": true, 00:14:43.848 "write": true, 00:14:43.848 "unmap": true, 00:14:43.848 "flush": true, 00:14:43.848 "reset": true, 00:14:43.848 "nvme_admin": false, 00:14:43.848 "nvme_io": false, 00:14:43.848 "nvme_io_md": false, 00:14:43.848 "write_zeroes": true, 00:14:43.848 "zcopy": true, 00:14:43.848 "get_zone_info": false, 00:14:43.848 "zone_management": false, 00:14:43.848 "zone_append": false, 00:14:43.848 "compare": false, 00:14:43.848 "compare_and_write": false, 00:14:43.848 "abort": true, 00:14:43.848 "seek_hole": false, 00:14:43.848 "seek_data": false, 00:14:43.848 "copy": true, 00:14:43.848 "nvme_iov_md": false 00:14:43.848 }, 00:14:43.848 "memory_domains": [ 00:14:43.848 { 00:14:43.848 "dma_device_id": "system", 00:14:43.848 "dma_device_type": 1 00:14:43.848 }, 00:14:43.848 { 00:14:43.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.848 "dma_device_type": 2 00:14:43.848 } 00:14:43.848 ], 00:14:43.848 "driver_specific": {} 00:14:43.848 } 00:14:43.848 ] 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.848 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.107 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.107 "name": "Existed_Raid", 00:14:44.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.107 "strip_size_kb": 64, 00:14:44.107 "state": "configuring", 00:14:44.107 "raid_level": "raid5f", 00:14:44.107 "superblock": false, 00:14:44.107 "num_base_bdevs": 4, 00:14:44.107 "num_base_bdevs_discovered": 2, 00:14:44.107 "num_base_bdevs_operational": 4, 00:14:44.107 "base_bdevs_list": [ 00:14:44.107 { 00:14:44.107 "name": "BaseBdev1", 00:14:44.107 "uuid": 
"0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:44.107 "is_configured": true, 00:14:44.107 "data_offset": 0, 00:14:44.107 "data_size": 65536 00:14:44.107 }, 00:14:44.107 { 00:14:44.107 "name": "BaseBdev2", 00:14:44.107 "uuid": "03a41592-e129-42f5-9856-beb29e8608ce", 00:14:44.107 "is_configured": true, 00:14:44.107 "data_offset": 0, 00:14:44.107 "data_size": 65536 00:14:44.107 }, 00:14:44.107 { 00:14:44.107 "name": "BaseBdev3", 00:14:44.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.107 "is_configured": false, 00:14:44.107 "data_offset": 0, 00:14:44.107 "data_size": 0 00:14:44.107 }, 00:14:44.107 { 00:14:44.107 "name": "BaseBdev4", 00:14:44.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.107 "is_configured": false, 00:14:44.107 "data_offset": 0, 00:14:44.107 "data_size": 0 00:14:44.107 } 00:14:44.107 ] 00:14:44.107 }' 00:14:44.107 04:30:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.107 04:30:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.367 [2024-12-13 04:30:44.275265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.367 BaseBdev3 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:14:44.367 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.368 [ 00:14:44.368 { 00:14:44.368 "name": "BaseBdev3", 00:14:44.368 "aliases": [ 00:14:44.368 "e2f93f2b-ff0c-46ee-9af7-9671450e5cb8" 00:14:44.368 ], 00:14:44.368 "product_name": "Malloc disk", 00:14:44.368 "block_size": 512, 00:14:44.368 "num_blocks": 65536, 00:14:44.368 "uuid": "e2f93f2b-ff0c-46ee-9af7-9671450e5cb8", 00:14:44.368 "assigned_rate_limits": { 00:14:44.368 "rw_ios_per_sec": 0, 00:14:44.368 "rw_mbytes_per_sec": 0, 00:14:44.368 "r_mbytes_per_sec": 0, 00:14:44.368 "w_mbytes_per_sec": 0 00:14:44.368 }, 00:14:44.368 "claimed": true, 00:14:44.368 "claim_type": "exclusive_write", 00:14:44.368 "zoned": false, 00:14:44.368 "supported_io_types": { 00:14:44.368 "read": true, 00:14:44.368 "write": true, 00:14:44.368 "unmap": true, 00:14:44.368 "flush": true, 00:14:44.368 "reset": true, 00:14:44.368 "nvme_admin": false, 
00:14:44.368 "nvme_io": false, 00:14:44.368 "nvme_io_md": false, 00:14:44.368 "write_zeroes": true, 00:14:44.368 "zcopy": true, 00:14:44.368 "get_zone_info": false, 00:14:44.368 "zone_management": false, 00:14:44.368 "zone_append": false, 00:14:44.368 "compare": false, 00:14:44.368 "compare_and_write": false, 00:14:44.368 "abort": true, 00:14:44.368 "seek_hole": false, 00:14:44.368 "seek_data": false, 00:14:44.368 "copy": true, 00:14:44.368 "nvme_iov_md": false 00:14:44.368 }, 00:14:44.368 "memory_domains": [ 00:14:44.368 { 00:14:44.368 "dma_device_id": "system", 00:14:44.368 "dma_device_type": 1 00:14:44.368 }, 00:14:44.368 { 00:14:44.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.368 "dma_device_type": 2 00:14:44.368 } 00:14:44.368 ], 00:14:44.368 "driver_specific": {} 00:14:44.368 } 00:14:44.368 ] 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.368 "name": "Existed_Raid", 00:14:44.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.368 "strip_size_kb": 64, 00:14:44.368 "state": "configuring", 00:14:44.368 "raid_level": "raid5f", 00:14:44.368 "superblock": false, 00:14:44.368 "num_base_bdevs": 4, 00:14:44.368 "num_base_bdevs_discovered": 3, 00:14:44.368 "num_base_bdevs_operational": 4, 00:14:44.368 "base_bdevs_list": [ 00:14:44.368 { 00:14:44.368 "name": "BaseBdev1", 00:14:44.368 "uuid": "0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:44.368 "is_configured": true, 00:14:44.368 "data_offset": 0, 00:14:44.368 "data_size": 65536 00:14:44.368 }, 00:14:44.368 { 00:14:44.368 "name": "BaseBdev2", 00:14:44.368 "uuid": "03a41592-e129-42f5-9856-beb29e8608ce", 00:14:44.368 "is_configured": true, 00:14:44.368 "data_offset": 0, 00:14:44.368 "data_size": 65536 00:14:44.368 }, 00:14:44.368 { 
00:14:44.368 "name": "BaseBdev3", 00:14:44.368 "uuid": "e2f93f2b-ff0c-46ee-9af7-9671450e5cb8", 00:14:44.368 "is_configured": true, 00:14:44.368 "data_offset": 0, 00:14:44.368 "data_size": 65536 00:14:44.368 }, 00:14:44.368 { 00:14:44.368 "name": "BaseBdev4", 00:14:44.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.368 "is_configured": false, 00:14:44.368 "data_offset": 0, 00:14:44.368 "data_size": 0 00:14:44.368 } 00:14:44.368 ] 00:14:44.368 }' 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.368 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 [2024-12-13 04:30:44.727307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.939 [2024-12-13 04:30:44.727366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:44.939 [2024-12-13 04:30:44.727382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:44.939 [2024-12-13 04:30:44.727737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:44.939 [2024-12-13 04:30:44.728316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:44.939 [2024-12-13 04:30:44.728333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:44.939 [2024-12-13 04:30:44.728584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.939 BaseBdev4 00:14:44.939 04:30:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 [ 00:14:44.939 { 00:14:44.939 "name": "BaseBdev4", 00:14:44.939 "aliases": [ 00:14:44.939 "297d09fa-8ca0-490a-9d16-4ac67476714f" 00:14:44.939 ], 00:14:44.939 "product_name": "Malloc disk", 00:14:44.939 "block_size": 512, 00:14:44.939 "num_blocks": 65536, 00:14:44.939 "uuid": "297d09fa-8ca0-490a-9d16-4ac67476714f", 00:14:44.939 "assigned_rate_limits": { 00:14:44.939 "rw_ios_per_sec": 0, 00:14:44.939 
"rw_mbytes_per_sec": 0, 00:14:44.939 "r_mbytes_per_sec": 0, 00:14:44.939 "w_mbytes_per_sec": 0 00:14:44.939 }, 00:14:44.939 "claimed": true, 00:14:44.939 "claim_type": "exclusive_write", 00:14:44.939 "zoned": false, 00:14:44.939 "supported_io_types": { 00:14:44.939 "read": true, 00:14:44.939 "write": true, 00:14:44.939 "unmap": true, 00:14:44.939 "flush": true, 00:14:44.939 "reset": true, 00:14:44.939 "nvme_admin": false, 00:14:44.939 "nvme_io": false, 00:14:44.939 "nvme_io_md": false, 00:14:44.939 "write_zeroes": true, 00:14:44.939 "zcopy": true, 00:14:44.939 "get_zone_info": false, 00:14:44.939 "zone_management": false, 00:14:44.939 "zone_append": false, 00:14:44.939 "compare": false, 00:14:44.939 "compare_and_write": false, 00:14:44.939 "abort": true, 00:14:44.939 "seek_hole": false, 00:14:44.939 "seek_data": false, 00:14:44.939 "copy": true, 00:14:44.939 "nvme_iov_md": false 00:14:44.939 }, 00:14:44.939 "memory_domains": [ 00:14:44.939 { 00:14:44.939 "dma_device_id": "system", 00:14:44.939 "dma_device_type": 1 00:14:44.939 }, 00:14:44.939 { 00:14:44.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.939 "dma_device_type": 2 00:14:44.939 } 00:14:44.939 ], 00:14:44.939 "driver_specific": {} 00:14:44.939 } 00:14:44.939 ] 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.939 04:30:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.939 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.940 "name": "Existed_Raid", 00:14:44.940 "uuid": "20323dab-59a8-46fd-901c-d5a08b67de22", 00:14:44.940 "strip_size_kb": 64, 00:14:44.940 "state": "online", 00:14:44.940 "raid_level": "raid5f", 00:14:44.940 "superblock": false, 00:14:44.940 "num_base_bdevs": 4, 00:14:44.940 "num_base_bdevs_discovered": 4, 00:14:44.940 "num_base_bdevs_operational": 4, 00:14:44.940 "base_bdevs_list": [ 00:14:44.940 { 00:14:44.940 "name": 
"BaseBdev1", 00:14:44.940 "uuid": "0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 }, 00:14:44.940 { 00:14:44.940 "name": "BaseBdev2", 00:14:44.940 "uuid": "03a41592-e129-42f5-9856-beb29e8608ce", 00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 }, 00:14:44.940 { 00:14:44.940 "name": "BaseBdev3", 00:14:44.940 "uuid": "e2f93f2b-ff0c-46ee-9af7-9671450e5cb8", 00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 }, 00:14:44.940 { 00:14:44.940 "name": "BaseBdev4", 00:14:44.940 "uuid": "297d09fa-8ca0-490a-9d16-4ac67476714f", 00:14:44.940 "is_configured": true, 00:14:44.940 "data_offset": 0, 00:14:44.940 "data_size": 65536 00:14:44.940 } 00:14:44.940 ] 00:14:44.940 }' 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.940 04:30:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.200 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.200 [2024-12-13 04:30:45.206793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.459 "name": "Existed_Raid", 00:14:45.459 "aliases": [ 00:14:45.459 "20323dab-59a8-46fd-901c-d5a08b67de22" 00:14:45.459 ], 00:14:45.459 "product_name": "Raid Volume", 00:14:45.459 "block_size": 512, 00:14:45.459 "num_blocks": 196608, 00:14:45.459 "uuid": "20323dab-59a8-46fd-901c-d5a08b67de22", 00:14:45.459 "assigned_rate_limits": { 00:14:45.459 "rw_ios_per_sec": 0, 00:14:45.459 "rw_mbytes_per_sec": 0, 00:14:45.459 "r_mbytes_per_sec": 0, 00:14:45.459 "w_mbytes_per_sec": 0 00:14:45.459 }, 00:14:45.459 "claimed": false, 00:14:45.459 "zoned": false, 00:14:45.459 "supported_io_types": { 00:14:45.459 "read": true, 00:14:45.459 "write": true, 00:14:45.459 "unmap": false, 00:14:45.459 "flush": false, 00:14:45.459 "reset": true, 00:14:45.459 "nvme_admin": false, 00:14:45.459 "nvme_io": false, 00:14:45.459 "nvme_io_md": false, 00:14:45.459 "write_zeroes": true, 00:14:45.459 "zcopy": false, 00:14:45.459 "get_zone_info": false, 00:14:45.459 "zone_management": false, 00:14:45.459 "zone_append": false, 00:14:45.459 "compare": false, 00:14:45.459 "compare_and_write": false, 00:14:45.459 "abort": false, 00:14:45.459 "seek_hole": false, 00:14:45.459 "seek_data": false, 00:14:45.459 "copy": false, 00:14:45.459 "nvme_iov_md": false 00:14:45.459 }, 00:14:45.459 "driver_specific": { 00:14:45.459 "raid": { 00:14:45.459 "uuid": "20323dab-59a8-46fd-901c-d5a08b67de22", 00:14:45.459 "strip_size_kb": 64, 
00:14:45.459 "state": "online", 00:14:45.459 "raid_level": "raid5f", 00:14:45.459 "superblock": false, 00:14:45.459 "num_base_bdevs": 4, 00:14:45.459 "num_base_bdevs_discovered": 4, 00:14:45.459 "num_base_bdevs_operational": 4, 00:14:45.459 "base_bdevs_list": [ 00:14:45.459 { 00:14:45.459 "name": "BaseBdev1", 00:14:45.459 "uuid": "0384a1bf-dd7f-46ea-a70d-b0c36fb82872", 00:14:45.459 "is_configured": true, 00:14:45.459 "data_offset": 0, 00:14:45.459 "data_size": 65536 00:14:45.459 }, 00:14:45.459 { 00:14:45.459 "name": "BaseBdev2", 00:14:45.459 "uuid": "03a41592-e129-42f5-9856-beb29e8608ce", 00:14:45.459 "is_configured": true, 00:14:45.459 "data_offset": 0, 00:14:45.459 "data_size": 65536 00:14:45.459 }, 00:14:45.459 { 00:14:45.459 "name": "BaseBdev3", 00:14:45.459 "uuid": "e2f93f2b-ff0c-46ee-9af7-9671450e5cb8", 00:14:45.459 "is_configured": true, 00:14:45.459 "data_offset": 0, 00:14:45.459 "data_size": 65536 00:14:45.459 }, 00:14:45.459 { 00:14:45.459 "name": "BaseBdev4", 00:14:45.459 "uuid": "297d09fa-8ca0-490a-9d16-4ac67476714f", 00:14:45.459 "is_configured": true, 00:14:45.459 "data_offset": 0, 00:14:45.459 "data_size": 65536 00:14:45.459 } 00:14:45.459 ] 00:14:45.459 } 00:14:45.459 } 00:14:45.459 }' 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:45.459 BaseBdev2 00:14:45.459 BaseBdev3 00:14:45.459 BaseBdev4' 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.459 04:30:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.459 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.460 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.719 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:45.720 [2024-12-13 04:30:45.538096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.720 04:30:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.720 "name": "Existed_Raid", 00:14:45.720 "uuid": "20323dab-59a8-46fd-901c-d5a08b67de22", 00:14:45.720 "strip_size_kb": 64, 00:14:45.720 "state": "online", 00:14:45.720 "raid_level": "raid5f", 00:14:45.720 "superblock": false, 00:14:45.720 "num_base_bdevs": 4, 00:14:45.720 "num_base_bdevs_discovered": 3, 00:14:45.720 "num_base_bdevs_operational": 3, 00:14:45.720 "base_bdevs_list": [ 00:14:45.720 { 00:14:45.720 "name": null, 00:14:45.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.720 "is_configured": false, 00:14:45.720 "data_offset": 0, 00:14:45.720 "data_size": 65536 00:14:45.720 }, 00:14:45.720 { 00:14:45.720 "name": "BaseBdev2", 00:14:45.720 "uuid": "03a41592-e129-42f5-9856-beb29e8608ce", 00:14:45.720 "is_configured": true, 00:14:45.720 "data_offset": 0, 00:14:45.720 "data_size": 65536 00:14:45.720 }, 00:14:45.720 { 00:14:45.720 "name": "BaseBdev3", 00:14:45.720 "uuid": "e2f93f2b-ff0c-46ee-9af7-9671450e5cb8", 00:14:45.720 "is_configured": true, 00:14:45.720 "data_offset": 0, 00:14:45.720 "data_size": 65536 00:14:45.720 }, 00:14:45.720 { 00:14:45.720 "name": "BaseBdev4", 00:14:45.720 "uuid": "297d09fa-8ca0-490a-9d16-4ac67476714f", 00:14:45.720 "is_configured": true, 00:14:45.720 "data_offset": 0, 00:14:45.720 "data_size": 65536 00:14:45.720 } 00:14:45.720 ] 00:14:45.720 }' 00:14:45.720 
04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.720 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.979 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:45.979 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.979 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.979 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.979 04:30:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.979 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.239 04:30:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.239 [2024-12-13 04:30:46.034166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.239 [2024-12-13 04:30:46.034266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.239 [2024-12-13 04:30:46.054776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.239 [2024-12-13 04:30:46.114705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.239 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.240 [2024-12-13 04:30:46.194551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:46.240 [2024-12-13 04:30:46.194601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.240 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.240 04:30:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.500 BaseBdev2 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:46.500 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 [ 00:14:46.501 { 00:14:46.501 "name": "BaseBdev2", 00:14:46.501 "aliases": [ 00:14:46.501 "a9459446-fa6a-4f2d-a278-2b22ad50a65e" 00:14:46.501 ], 00:14:46.501 "product_name": "Malloc disk", 00:14:46.501 "block_size": 512, 00:14:46.501 "num_blocks": 65536, 00:14:46.501 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:46.501 "assigned_rate_limits": { 00:14:46.501 "rw_ios_per_sec": 0, 00:14:46.501 "rw_mbytes_per_sec": 0, 00:14:46.501 "r_mbytes_per_sec": 0, 00:14:46.501 "w_mbytes_per_sec": 0 00:14:46.501 }, 00:14:46.501 "claimed": false, 00:14:46.501 "zoned": false, 00:14:46.501 "supported_io_types": { 00:14:46.501 "read": true, 00:14:46.501 "write": true, 00:14:46.501 "unmap": true, 00:14:46.501 "flush": true, 00:14:46.501 "reset": true, 00:14:46.501 "nvme_admin": false, 00:14:46.501 "nvme_io": false, 00:14:46.501 "nvme_io_md": false, 00:14:46.501 "write_zeroes": true, 00:14:46.501 "zcopy": true, 00:14:46.501 "get_zone_info": false, 00:14:46.501 "zone_management": false, 00:14:46.501 "zone_append": false, 00:14:46.501 "compare": false, 00:14:46.501 "compare_and_write": false, 00:14:46.501 "abort": true, 00:14:46.501 "seek_hole": false, 00:14:46.501 "seek_data": false, 00:14:46.501 "copy": true, 00:14:46.501 "nvme_iov_md": false 00:14:46.501 }, 00:14:46.501 "memory_domains": [ 00:14:46.501 { 00:14:46.501 "dma_device_id": "system", 00:14:46.501 "dma_device_type": 1 00:14:46.501 }, 
00:14:46.501 { 00:14:46.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.501 "dma_device_type": 2 00:14:46.501 } 00:14:46.501 ], 00:14:46.501 "driver_specific": {} 00:14:46.501 } 00:14:46.501 ] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 BaseBdev3 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 [ 00:14:46.501 { 00:14:46.501 "name": "BaseBdev3", 00:14:46.501 "aliases": [ 00:14:46.501 "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5" 00:14:46.501 ], 00:14:46.501 "product_name": "Malloc disk", 00:14:46.501 "block_size": 512, 00:14:46.501 "num_blocks": 65536, 00:14:46.501 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:46.501 "assigned_rate_limits": { 00:14:46.501 "rw_ios_per_sec": 0, 00:14:46.501 "rw_mbytes_per_sec": 0, 00:14:46.501 "r_mbytes_per_sec": 0, 00:14:46.501 "w_mbytes_per_sec": 0 00:14:46.501 }, 00:14:46.501 "claimed": false, 00:14:46.501 "zoned": false, 00:14:46.501 "supported_io_types": { 00:14:46.501 "read": true, 00:14:46.501 "write": true, 00:14:46.501 "unmap": true, 00:14:46.501 "flush": true, 00:14:46.501 "reset": true, 00:14:46.501 "nvme_admin": false, 00:14:46.501 "nvme_io": false, 00:14:46.501 "nvme_io_md": false, 00:14:46.501 "write_zeroes": true, 00:14:46.501 "zcopy": true, 00:14:46.501 "get_zone_info": false, 00:14:46.501 "zone_management": false, 00:14:46.501 "zone_append": false, 00:14:46.501 "compare": false, 00:14:46.501 "compare_and_write": false, 00:14:46.501 "abort": true, 00:14:46.501 "seek_hole": false, 00:14:46.501 "seek_data": false, 00:14:46.501 "copy": true, 00:14:46.501 "nvme_iov_md": false 00:14:46.501 }, 00:14:46.501 "memory_domains": [ 00:14:46.501 { 00:14:46.501 "dma_device_id": "system", 00:14:46.501 
"dma_device_type": 1 00:14:46.501 }, 00:14:46.501 { 00:14:46.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.501 "dma_device_type": 2 00:14:46.501 } 00:14:46.501 ], 00:14:46.501 "driver_specific": {} 00:14:46.501 } 00:14:46.501 ] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 BaseBdev4 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.501 04:30:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 [ 00:14:46.501 { 00:14:46.501 "name": "BaseBdev4", 00:14:46.501 "aliases": [ 00:14:46.501 "a9b1b224-a7a4-4274-901f-063786e7394e" 00:14:46.501 ], 00:14:46.501 "product_name": "Malloc disk", 00:14:46.501 "block_size": 512, 00:14:46.501 "num_blocks": 65536, 00:14:46.501 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:46.501 "assigned_rate_limits": { 00:14:46.501 "rw_ios_per_sec": 0, 00:14:46.501 "rw_mbytes_per_sec": 0, 00:14:46.501 "r_mbytes_per_sec": 0, 00:14:46.501 "w_mbytes_per_sec": 0 00:14:46.501 }, 00:14:46.501 "claimed": false, 00:14:46.501 "zoned": false, 00:14:46.501 "supported_io_types": { 00:14:46.501 "read": true, 00:14:46.501 "write": true, 00:14:46.501 "unmap": true, 00:14:46.501 "flush": true, 00:14:46.501 "reset": true, 00:14:46.501 "nvme_admin": false, 00:14:46.501 "nvme_io": false, 00:14:46.501 "nvme_io_md": false, 00:14:46.501 "write_zeroes": true, 00:14:46.501 "zcopy": true, 00:14:46.501 "get_zone_info": false, 00:14:46.501 "zone_management": false, 00:14:46.501 "zone_append": false, 00:14:46.501 "compare": false, 00:14:46.501 "compare_and_write": false, 00:14:46.501 "abort": true, 00:14:46.501 "seek_hole": false, 00:14:46.501 "seek_data": false, 00:14:46.501 "copy": true, 00:14:46.501 "nvme_iov_md": false 00:14:46.501 }, 00:14:46.501 "memory_domains": [ 00:14:46.501 { 00:14:46.501 
"dma_device_id": "system", 00:14:46.501 "dma_device_type": 1 00:14:46.501 }, 00:14:46.501 { 00:14:46.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.501 "dma_device_type": 2 00:14:46.501 } 00:14:46.501 ], 00:14:46.501 "driver_specific": {} 00:14:46.501 } 00:14:46.501 ] 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.502 [2024-12-13 04:30:46.449410] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.502 [2024-12-13 04:30:46.449518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.502 [2024-12-13 04:30:46.449582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.502 [2024-12-13 04:30:46.451720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.502 [2024-12-13 04:30:46.451810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.502 "name": "Existed_Raid", 00:14:46.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.502 "strip_size_kb": 64, 00:14:46.502 "state": "configuring", 00:14:46.502 "raid_level": "raid5f", 00:14:46.502 "superblock": false, 00:14:46.502 
"num_base_bdevs": 4, 00:14:46.502 "num_base_bdevs_discovered": 3, 00:14:46.502 "num_base_bdevs_operational": 4, 00:14:46.502 "base_bdevs_list": [ 00:14:46.502 { 00:14:46.502 "name": "BaseBdev1", 00:14:46.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.502 "is_configured": false, 00:14:46.502 "data_offset": 0, 00:14:46.502 "data_size": 0 00:14:46.502 }, 00:14:46.502 { 00:14:46.502 "name": "BaseBdev2", 00:14:46.502 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:46.502 "is_configured": true, 00:14:46.502 "data_offset": 0, 00:14:46.502 "data_size": 65536 00:14:46.502 }, 00:14:46.502 { 00:14:46.502 "name": "BaseBdev3", 00:14:46.502 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:46.502 "is_configured": true, 00:14:46.502 "data_offset": 0, 00:14:46.502 "data_size": 65536 00:14:46.502 }, 00:14:46.502 { 00:14:46.502 "name": "BaseBdev4", 00:14:46.502 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:46.502 "is_configured": true, 00:14:46.502 "data_offset": 0, 00:14:46.502 "data_size": 65536 00:14:46.502 } 00:14:46.502 ] 00:14:46.502 }' 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.502 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.072 [2024-12-13 04:30:46.892597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.072 "name": "Existed_Raid", 00:14:47.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.072 "strip_size_kb": 64, 00:14:47.072 "state": "configuring", 00:14:47.072 "raid_level": "raid5f", 00:14:47.072 "superblock": false, 00:14:47.072 "num_base_bdevs": 4, 
00:14:47.072 "num_base_bdevs_discovered": 2, 00:14:47.072 "num_base_bdevs_operational": 4, 00:14:47.072 "base_bdevs_list": [ 00:14:47.072 { 00:14:47.072 "name": "BaseBdev1", 00:14:47.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.072 "is_configured": false, 00:14:47.072 "data_offset": 0, 00:14:47.072 "data_size": 0 00:14:47.072 }, 00:14:47.072 { 00:14:47.072 "name": null, 00:14:47.072 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:47.072 "is_configured": false, 00:14:47.072 "data_offset": 0, 00:14:47.072 "data_size": 65536 00:14:47.072 }, 00:14:47.072 { 00:14:47.072 "name": "BaseBdev3", 00:14:47.072 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:47.072 "is_configured": true, 00:14:47.072 "data_offset": 0, 00:14:47.072 "data_size": 65536 00:14:47.072 }, 00:14:47.072 { 00:14:47.072 "name": "BaseBdev4", 00:14:47.072 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:47.072 "is_configured": true, 00:14:47.072 "data_offset": 0, 00:14:47.072 "data_size": 65536 00:14:47.072 } 00:14:47.072 ] 00:14:47.072 }' 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.072 04:30:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.341 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.342 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.342 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.342 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.342 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:47.620 04:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.620 [2024-12-13 04:30:47.393113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.620 BaseBdev1 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.620 04:30:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.620 [ 00:14:47.620 { 00:14:47.620 "name": "BaseBdev1", 00:14:47.620 "aliases": [ 00:14:47.620 "7142d704-8f25-432f-8299-94e9c907e38f" 00:14:47.620 ], 00:14:47.620 "product_name": "Malloc disk", 00:14:47.620 "block_size": 512, 00:14:47.620 "num_blocks": 65536, 00:14:47.620 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:47.620 "assigned_rate_limits": { 00:14:47.620 "rw_ios_per_sec": 0, 00:14:47.620 "rw_mbytes_per_sec": 0, 00:14:47.620 "r_mbytes_per_sec": 0, 00:14:47.620 "w_mbytes_per_sec": 0 00:14:47.620 }, 00:14:47.620 "claimed": true, 00:14:47.620 "claim_type": "exclusive_write", 00:14:47.620 "zoned": false, 00:14:47.620 "supported_io_types": { 00:14:47.620 "read": true, 00:14:47.620 "write": true, 00:14:47.620 "unmap": true, 00:14:47.620 "flush": true, 00:14:47.620 "reset": true, 00:14:47.620 "nvme_admin": false, 00:14:47.620 "nvme_io": false, 00:14:47.620 "nvme_io_md": false, 00:14:47.620 "write_zeroes": true, 00:14:47.620 "zcopy": true, 00:14:47.620 "get_zone_info": false, 00:14:47.620 "zone_management": false, 00:14:47.620 "zone_append": false, 00:14:47.620 "compare": false, 00:14:47.620 "compare_and_write": false, 00:14:47.620 "abort": true, 00:14:47.620 "seek_hole": false, 00:14:47.620 "seek_data": false, 00:14:47.620 "copy": true, 00:14:47.620 "nvme_iov_md": false 00:14:47.620 }, 00:14:47.620 "memory_domains": [ 00:14:47.620 { 00:14:47.620 "dma_device_id": "system", 00:14:47.620 "dma_device_type": 1 00:14:47.620 }, 00:14:47.620 { 00:14:47.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.620 "dma_device_type": 2 00:14:47.620 } 00:14:47.620 ], 00:14:47.620 "driver_specific": {} 00:14:47.620 } 00:14:47.620 ] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.620 04:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.620 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.620 "name": "Existed_Raid", 00:14:47.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.621 "strip_size_kb": 64, 00:14:47.621 "state": 
"configuring", 00:14:47.621 "raid_level": "raid5f", 00:14:47.621 "superblock": false, 00:14:47.621 "num_base_bdevs": 4, 00:14:47.621 "num_base_bdevs_discovered": 3, 00:14:47.621 "num_base_bdevs_operational": 4, 00:14:47.621 "base_bdevs_list": [ 00:14:47.621 { 00:14:47.621 "name": "BaseBdev1", 00:14:47.621 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:47.621 "is_configured": true, 00:14:47.621 "data_offset": 0, 00:14:47.621 "data_size": 65536 00:14:47.621 }, 00:14:47.621 { 00:14:47.621 "name": null, 00:14:47.621 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:47.621 "is_configured": false, 00:14:47.621 "data_offset": 0, 00:14:47.621 "data_size": 65536 00:14:47.621 }, 00:14:47.621 { 00:14:47.621 "name": "BaseBdev3", 00:14:47.621 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:47.621 "is_configured": true, 00:14:47.621 "data_offset": 0, 00:14:47.621 "data_size": 65536 00:14:47.621 }, 00:14:47.621 { 00:14:47.621 "name": "BaseBdev4", 00:14:47.621 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:47.621 "is_configured": true, 00:14:47.621 "data_offset": 0, 00:14:47.621 "data_size": 65536 00:14:47.621 } 00:14:47.621 ] 00:14:47.621 }' 00:14:47.621 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.621 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.896 04:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.896 [2024-12-13 04:30:47.860543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.896 04:30:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.896 04:30:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.156 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.156 "name": "Existed_Raid", 00:14:48.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.156 "strip_size_kb": 64, 00:14:48.156 "state": "configuring", 00:14:48.156 "raid_level": "raid5f", 00:14:48.156 "superblock": false, 00:14:48.156 "num_base_bdevs": 4, 00:14:48.156 "num_base_bdevs_discovered": 2, 00:14:48.156 "num_base_bdevs_operational": 4, 00:14:48.156 "base_bdevs_list": [ 00:14:48.156 { 00:14:48.156 "name": "BaseBdev1", 00:14:48.156 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:48.156 "is_configured": true, 00:14:48.156 "data_offset": 0, 00:14:48.156 "data_size": 65536 00:14:48.156 }, 00:14:48.156 { 00:14:48.156 "name": null, 00:14:48.156 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:48.156 "is_configured": false, 00:14:48.156 "data_offset": 0, 00:14:48.156 "data_size": 65536 00:14:48.156 }, 00:14:48.156 { 00:14:48.156 "name": null, 00:14:48.156 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:48.156 "is_configured": false, 00:14:48.156 "data_offset": 0, 00:14:48.156 "data_size": 65536 00:14:48.156 }, 00:14:48.156 { 00:14:48.156 "name": "BaseBdev4", 00:14:48.156 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:48.156 "is_configured": true, 00:14:48.156 "data_offset": 0, 00:14:48.156 "data_size": 65536 00:14:48.156 } 00:14:48.156 ] 00:14:48.156 }' 00:14:48.156 04:30:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.157 04:30:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 [2024-12-13 04:30:48.299937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.417 
04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.417 "name": "Existed_Raid", 00:14:48.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.417 "strip_size_kb": 64, 00:14:48.417 "state": "configuring", 00:14:48.417 "raid_level": "raid5f", 00:14:48.417 "superblock": false, 00:14:48.417 "num_base_bdevs": 4, 00:14:48.417 "num_base_bdevs_discovered": 3, 00:14:48.417 "num_base_bdevs_operational": 4, 00:14:48.417 "base_bdevs_list": [ 00:14:48.417 { 00:14:48.417 "name": "BaseBdev1", 00:14:48.417 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:48.417 "is_configured": true, 00:14:48.417 "data_offset": 0, 00:14:48.417 "data_size": 65536 00:14:48.417 }, 00:14:48.417 { 00:14:48.417 "name": null, 00:14:48.417 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:48.417 "is_configured": 
false, 00:14:48.417 "data_offset": 0, 00:14:48.417 "data_size": 65536 00:14:48.417 }, 00:14:48.417 { 00:14:48.417 "name": "BaseBdev3", 00:14:48.417 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:48.417 "is_configured": true, 00:14:48.417 "data_offset": 0, 00:14:48.417 "data_size": 65536 00:14:48.417 }, 00:14:48.417 { 00:14:48.417 "name": "BaseBdev4", 00:14:48.417 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:48.417 "is_configured": true, 00:14:48.417 "data_offset": 0, 00:14:48.417 "data_size": 65536 00:14:48.417 } 00:14:48.417 ] 00:14:48.417 }' 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.417 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.987 [2024-12-13 04:30:48.807134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.987 "name": "Existed_Raid", 00:14:48.987 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:48.987 "strip_size_kb": 64, 00:14:48.987 "state": "configuring", 00:14:48.987 "raid_level": "raid5f", 00:14:48.987 "superblock": false, 00:14:48.987 "num_base_bdevs": 4, 00:14:48.987 "num_base_bdevs_discovered": 2, 00:14:48.987 "num_base_bdevs_operational": 4, 00:14:48.987 "base_bdevs_list": [ 00:14:48.987 { 00:14:48.987 "name": null, 00:14:48.987 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:48.987 "is_configured": false, 00:14:48.987 "data_offset": 0, 00:14:48.987 "data_size": 65536 00:14:48.987 }, 00:14:48.987 { 00:14:48.987 "name": null, 00:14:48.987 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:48.987 "is_configured": false, 00:14:48.987 "data_offset": 0, 00:14:48.987 "data_size": 65536 00:14:48.987 }, 00:14:48.987 { 00:14:48.987 "name": "BaseBdev3", 00:14:48.987 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:48.987 "is_configured": true, 00:14:48.987 "data_offset": 0, 00:14:48.987 "data_size": 65536 00:14:48.987 }, 00:14:48.987 { 00:14:48.987 "name": "BaseBdev4", 00:14:48.987 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:48.987 "is_configured": true, 00:14:48.987 "data_offset": 0, 00:14:48.987 "data_size": 65536 00:14:48.987 } 00:14:48.987 ] 00:14:48.987 }' 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.987 04:30:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 [2024-12-13 04:30:49.362038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.557 "name": "Existed_Raid", 00:14:49.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.557 "strip_size_kb": 64, 00:14:49.557 "state": "configuring", 00:14:49.557 "raid_level": "raid5f", 00:14:49.557 "superblock": false, 00:14:49.557 "num_base_bdevs": 4, 00:14:49.557 "num_base_bdevs_discovered": 3, 00:14:49.557 "num_base_bdevs_operational": 4, 00:14:49.557 "base_bdevs_list": [ 00:14:49.557 { 00:14:49.557 "name": null, 00:14:49.557 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:49.557 "is_configured": false, 00:14:49.557 "data_offset": 0, 00:14:49.557 "data_size": 65536 00:14:49.557 }, 00:14:49.557 { 00:14:49.557 "name": "BaseBdev2", 00:14:49.557 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:49.557 "is_configured": true, 00:14:49.557 "data_offset": 0, 00:14:49.557 "data_size": 65536 00:14:49.557 }, 00:14:49.557 { 00:14:49.557 "name": "BaseBdev3", 00:14:49.557 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:49.557 "is_configured": true, 00:14:49.557 "data_offset": 0, 00:14:49.557 "data_size": 65536 00:14:49.557 }, 00:14:49.557 { 00:14:49.557 "name": "BaseBdev4", 00:14:49.557 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:49.557 "is_configured": true, 00:14:49.557 "data_offset": 0, 00:14:49.557 "data_size": 65536 00:14:49.557 } 00:14:49.557 ] 00:14:49.557 }' 00:14:49.557 04:30:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.557 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.817 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.817 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.817 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.817 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.817 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7142d704-8f25-432f-8299-94e9c907e38f 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.077 [2024-12-13 04:30:49.926891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:50.077 [2024-12-13 
04:30:49.926938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:50.077 [2024-12-13 04:30:49.926946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:50.077 [2024-12-13 04:30:49.927239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:50.077 [2024-12-13 04:30:49.927748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:50.077 [2024-12-13 04:30:49.927763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:50.077 [2024-12-13 04:30:49.927977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.077 NewBaseBdev 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.077 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.077 [ 00:14:50.077 { 00:14:50.077 "name": "NewBaseBdev", 00:14:50.077 "aliases": [ 00:14:50.077 "7142d704-8f25-432f-8299-94e9c907e38f" 00:14:50.077 ], 00:14:50.077 "product_name": "Malloc disk", 00:14:50.077 "block_size": 512, 00:14:50.077 "num_blocks": 65536, 00:14:50.077 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:50.077 "assigned_rate_limits": { 00:14:50.077 "rw_ios_per_sec": 0, 00:14:50.077 "rw_mbytes_per_sec": 0, 00:14:50.077 "r_mbytes_per_sec": 0, 00:14:50.077 "w_mbytes_per_sec": 0 00:14:50.077 }, 00:14:50.077 "claimed": true, 00:14:50.077 "claim_type": "exclusive_write", 00:14:50.077 "zoned": false, 00:14:50.077 "supported_io_types": { 00:14:50.077 "read": true, 00:14:50.077 "write": true, 00:14:50.077 "unmap": true, 00:14:50.077 "flush": true, 00:14:50.077 "reset": true, 00:14:50.077 "nvme_admin": false, 00:14:50.077 "nvme_io": false, 00:14:50.078 "nvme_io_md": false, 00:14:50.078 "write_zeroes": true, 00:14:50.078 "zcopy": true, 00:14:50.078 "get_zone_info": false, 00:14:50.078 "zone_management": false, 00:14:50.078 "zone_append": false, 00:14:50.078 "compare": false, 00:14:50.078 "compare_and_write": false, 00:14:50.078 "abort": true, 00:14:50.078 "seek_hole": false, 00:14:50.078 "seek_data": false, 00:14:50.078 "copy": true, 00:14:50.078 "nvme_iov_md": false 00:14:50.078 }, 00:14:50.078 "memory_domains": [ 00:14:50.078 { 00:14:50.078 "dma_device_id": "system", 00:14:50.078 "dma_device_type": 1 00:14:50.078 }, 00:14:50.078 { 00:14:50.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.078 "dma_device_type": 2 00:14:50.078 } 
00:14:50.078 ], 00:14:50.078 "driver_specific": {} 00:14:50.078 } 00:14:50.078 ] 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.078 04:30:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.078 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.078 "name": "Existed_Raid", 00:14:50.078 "uuid": "ab25310b-ee6c-4254-a357-a71f23dbef68", 00:14:50.078 "strip_size_kb": 64, 00:14:50.078 "state": "online", 00:14:50.078 "raid_level": "raid5f", 00:14:50.078 "superblock": false, 00:14:50.078 "num_base_bdevs": 4, 00:14:50.078 "num_base_bdevs_discovered": 4, 00:14:50.078 "num_base_bdevs_operational": 4, 00:14:50.078 "base_bdevs_list": [ 00:14:50.078 { 00:14:50.078 "name": "NewBaseBdev", 00:14:50.078 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:50.078 "is_configured": true, 00:14:50.078 "data_offset": 0, 00:14:50.078 "data_size": 65536 00:14:50.078 }, 00:14:50.078 { 00:14:50.078 "name": "BaseBdev2", 00:14:50.078 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:50.078 "is_configured": true, 00:14:50.078 "data_offset": 0, 00:14:50.078 "data_size": 65536 00:14:50.078 }, 00:14:50.078 { 00:14:50.078 "name": "BaseBdev3", 00:14:50.078 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:50.078 "is_configured": true, 00:14:50.078 "data_offset": 0, 00:14:50.078 "data_size": 65536 00:14:50.078 }, 00:14:50.078 { 00:14:50.078 "name": "BaseBdev4", 00:14:50.078 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:50.078 "is_configured": true, 00:14:50.078 "data_offset": 0, 00:14:50.078 "data_size": 65536 00:14:50.078 } 00:14:50.078 ] 00:14:50.078 }' 00:14:50.078 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.078 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.648 [2024-12-13 04:30:50.390280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.648 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.648 "name": "Existed_Raid", 00:14:50.648 "aliases": [ 00:14:50.648 "ab25310b-ee6c-4254-a357-a71f23dbef68" 00:14:50.648 ], 00:14:50.648 "product_name": "Raid Volume", 00:14:50.648 "block_size": 512, 00:14:50.648 "num_blocks": 196608, 00:14:50.648 "uuid": "ab25310b-ee6c-4254-a357-a71f23dbef68", 00:14:50.648 "assigned_rate_limits": { 00:14:50.648 "rw_ios_per_sec": 0, 00:14:50.648 "rw_mbytes_per_sec": 0, 00:14:50.648 "r_mbytes_per_sec": 0, 00:14:50.648 "w_mbytes_per_sec": 0 00:14:50.648 }, 00:14:50.648 "claimed": false, 00:14:50.648 "zoned": false, 00:14:50.648 "supported_io_types": { 00:14:50.648 "read": true, 00:14:50.648 "write": true, 00:14:50.648 "unmap": false, 00:14:50.648 "flush": false, 00:14:50.648 "reset": true, 00:14:50.648 "nvme_admin": false, 00:14:50.648 "nvme_io": false, 00:14:50.648 "nvme_io_md": 
false, 00:14:50.648 "write_zeroes": true, 00:14:50.648 "zcopy": false, 00:14:50.649 "get_zone_info": false, 00:14:50.649 "zone_management": false, 00:14:50.649 "zone_append": false, 00:14:50.649 "compare": false, 00:14:50.649 "compare_and_write": false, 00:14:50.649 "abort": false, 00:14:50.649 "seek_hole": false, 00:14:50.649 "seek_data": false, 00:14:50.649 "copy": false, 00:14:50.649 "nvme_iov_md": false 00:14:50.649 }, 00:14:50.649 "driver_specific": { 00:14:50.649 "raid": { 00:14:50.649 "uuid": "ab25310b-ee6c-4254-a357-a71f23dbef68", 00:14:50.649 "strip_size_kb": 64, 00:14:50.649 "state": "online", 00:14:50.649 "raid_level": "raid5f", 00:14:50.649 "superblock": false, 00:14:50.649 "num_base_bdevs": 4, 00:14:50.649 "num_base_bdevs_discovered": 4, 00:14:50.649 "num_base_bdevs_operational": 4, 00:14:50.649 "base_bdevs_list": [ 00:14:50.649 { 00:14:50.649 "name": "NewBaseBdev", 00:14:50.649 "uuid": "7142d704-8f25-432f-8299-94e9c907e38f", 00:14:50.649 "is_configured": true, 00:14:50.649 "data_offset": 0, 00:14:50.649 "data_size": 65536 00:14:50.649 }, 00:14:50.649 { 00:14:50.649 "name": "BaseBdev2", 00:14:50.649 "uuid": "a9459446-fa6a-4f2d-a278-2b22ad50a65e", 00:14:50.649 "is_configured": true, 00:14:50.649 "data_offset": 0, 00:14:50.649 "data_size": 65536 00:14:50.649 }, 00:14:50.649 { 00:14:50.649 "name": "BaseBdev3", 00:14:50.649 "uuid": "79c821ef-3712-4e9d-b3a0-0fc897ec5fe5", 00:14:50.649 "is_configured": true, 00:14:50.649 "data_offset": 0, 00:14:50.649 "data_size": 65536 00:14:50.649 }, 00:14:50.649 { 00:14:50.649 "name": "BaseBdev4", 00:14:50.649 "uuid": "a9b1b224-a7a4-4274-901f-063786e7394e", 00:14:50.649 "is_configured": true, 00:14:50.649 "data_offset": 0, 00:14:50.649 "data_size": 65536 00:14:50.649 } 00:14:50.649 ] 00:14:50.649 } 00:14:50.649 } 00:14:50.649 }' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.649 04:30:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:50.649 BaseBdev2 00:14:50.649 BaseBdev3 00:14:50.649 BaseBdev4' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.649 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.649 04:30:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.909 [2024-12-13 04:30:50.693607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.909 [2024-12-13 04:30:50.693632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.909 [2024-12-13 04:30:50.693704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.909 [2024-12-13 04:30:50.693973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.909 [2024-12-13 04:30:50.693983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 94995 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 94995 ']' 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 94995 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94995 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.909 killing process with pid 94995 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94995' 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 94995 00:14:50.909 [2024-12-13 04:30:50.744100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.909 04:30:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 94995 00:14:50.909 [2024-12-13 04:30:50.820483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.168 04:30:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:51.168 00:14:51.168 real 0m9.761s 00:14:51.168 user 0m16.318s 00:14:51.168 sys 0m2.229s 00:14:51.169 ************************************ 00:14:51.169 END TEST raid5f_state_function_test 00:14:51.169 ************************************ 00:14:51.169 04:30:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.169 04:30:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.428 04:30:51 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:51.429 04:30:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:51.429 04:30:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.429 04:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.429 ************************************ 00:14:51.429 START TEST 
raid5f_state_function_test_sb 00:14:51.429 ************************************ 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:51.429 
04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95649 00:14:51.429 Process raid pid: 95649 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95649' 00:14:51.429 04:30:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95649 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95649 ']' 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.429 04:30:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.429 [2024-12-13 04:30:51.341945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:51.429 [2024-12-13 04:30:51.342090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.689 [2024-12-13 04:30:51.495954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.689 [2024-12-13 04:30:51.534789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.689 [2024-12-13 04:30:51.611913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.689 [2024-12-13 04:30:51.612043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.258 [2024-12-13 04:30:52.163015] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.258 [2024-12-13 04:30:52.163085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.258 [2024-12-13 04:30:52.163096] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.258 [2024-12-13 04:30:52.163106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.258 [2024-12-13 04:30:52.163112] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:52.258 [2024-12-13 04:30:52.163124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.258 [2024-12-13 04:30:52.163129] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.258 [2024-12-13 04:30:52.163138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:52.258 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.259 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.259 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.259 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.259 "name": "Existed_Raid", 00:14:52.259 "uuid": "7ef68209-5f9e-430d-8330-9a8446febb8a", 00:14:52.259 "strip_size_kb": 64, 00:14:52.259 "state": "configuring", 00:14:52.259 "raid_level": "raid5f", 00:14:52.259 "superblock": true, 00:14:52.259 "num_base_bdevs": 4, 00:14:52.259 "num_base_bdevs_discovered": 0, 00:14:52.259 "num_base_bdevs_operational": 4, 00:14:52.259 "base_bdevs_list": [ 00:14:52.259 { 00:14:52.259 "name": "BaseBdev1", 00:14:52.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.259 "is_configured": false, 00:14:52.259 "data_offset": 0, 00:14:52.259 "data_size": 0 00:14:52.259 }, 00:14:52.259 { 00:14:52.259 "name": "BaseBdev2", 00:14:52.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.259 "is_configured": false, 00:14:52.259 "data_offset": 0, 00:14:52.259 "data_size": 0 00:14:52.259 }, 00:14:52.259 { 00:14:52.259 "name": "BaseBdev3", 00:14:52.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.259 "is_configured": false, 00:14:52.259 "data_offset": 0, 00:14:52.259 "data_size": 0 00:14:52.259 }, 00:14:52.259 { 00:14:52.259 "name": "BaseBdev4", 00:14:52.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.259 "is_configured": false, 00:14:52.259 "data_offset": 0, 00:14:52.259 "data_size": 0 00:14:52.259 } 00:14:52.259 ] 00:14:52.259 }' 00:14:52.259 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.259 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 [2024-12-13 04:30:52.638046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.829 [2024-12-13 04:30:52.638132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 [2024-12-13 04:30:52.650064] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.829 [2024-12-13 04:30:52.650151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.829 [2024-12-13 04:30:52.650178] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.829 [2024-12-13 04:30:52.650200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.829 [2024-12-13 04:30:52.650217] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.829 [2024-12-13 04:30:52.650237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.829 [2024-12-13 04:30:52.650254] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.829 [2024-12-13 04:30:52.650275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 [2024-12-13 04:30:52.677128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.829 BaseBdev1 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.829 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.830 [ 00:14:52.830 { 00:14:52.830 "name": "BaseBdev1", 00:14:52.830 "aliases": [ 00:14:52.830 "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e" 00:14:52.830 ], 00:14:52.830 "product_name": "Malloc disk", 00:14:52.830 "block_size": 512, 00:14:52.830 "num_blocks": 65536, 00:14:52.830 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:52.830 "assigned_rate_limits": { 00:14:52.830 "rw_ios_per_sec": 0, 00:14:52.830 "rw_mbytes_per_sec": 0, 00:14:52.830 "r_mbytes_per_sec": 0, 00:14:52.830 "w_mbytes_per_sec": 0 00:14:52.830 }, 00:14:52.830 "claimed": true, 00:14:52.830 "claim_type": "exclusive_write", 00:14:52.830 "zoned": false, 00:14:52.830 "supported_io_types": { 00:14:52.830 "read": true, 00:14:52.830 "write": true, 00:14:52.830 "unmap": true, 00:14:52.830 "flush": true, 00:14:52.830 "reset": true, 00:14:52.830 "nvme_admin": false, 00:14:52.830 "nvme_io": false, 00:14:52.830 "nvme_io_md": false, 00:14:52.830 "write_zeroes": true, 00:14:52.830 "zcopy": true, 00:14:52.830 "get_zone_info": false, 00:14:52.830 "zone_management": false, 00:14:52.830 "zone_append": false, 00:14:52.830 "compare": false, 00:14:52.830 "compare_and_write": false, 00:14:52.830 "abort": true, 00:14:52.830 "seek_hole": false, 00:14:52.830 "seek_data": false, 00:14:52.830 "copy": true, 00:14:52.830 "nvme_iov_md": false 00:14:52.830 }, 00:14:52.830 "memory_domains": [ 00:14:52.830 { 00:14:52.830 "dma_device_id": "system", 00:14:52.830 "dma_device_type": 1 00:14:52.830 }, 00:14:52.830 { 00:14:52.830 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:52.830 "dma_device_type": 2 00:14:52.830 } 00:14:52.830 ], 00:14:52.830 "driver_specific": {} 00:14:52.830 } 00:14:52.830 ] 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.830 04:30:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.830 "name": "Existed_Raid", 00:14:52.830 "uuid": "4489d8e7-a7d3-49ad-ba5e-86950bbe1f68", 00:14:52.830 "strip_size_kb": 64, 00:14:52.830 "state": "configuring", 00:14:52.830 "raid_level": "raid5f", 00:14:52.830 "superblock": true, 00:14:52.830 "num_base_bdevs": 4, 00:14:52.830 "num_base_bdevs_discovered": 1, 00:14:52.830 "num_base_bdevs_operational": 4, 00:14:52.830 "base_bdevs_list": [ 00:14:52.830 { 00:14:52.830 "name": "BaseBdev1", 00:14:52.830 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:52.830 "is_configured": true, 00:14:52.830 "data_offset": 2048, 00:14:52.830 "data_size": 63488 00:14:52.830 }, 00:14:52.830 { 00:14:52.830 "name": "BaseBdev2", 00:14:52.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.830 "is_configured": false, 00:14:52.830 "data_offset": 0, 00:14:52.830 "data_size": 0 00:14:52.830 }, 00:14:52.830 { 00:14:52.830 "name": "BaseBdev3", 00:14:52.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.830 "is_configured": false, 00:14:52.830 "data_offset": 0, 00:14:52.830 "data_size": 0 00:14:52.830 }, 00:14:52.830 { 00:14:52.830 "name": "BaseBdev4", 00:14:52.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.830 "is_configured": false, 00:14:52.830 "data_offset": 0, 00:14:52.830 "data_size": 0 00:14:52.830 } 00:14:52.830 ] 00:14:52.830 }' 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.830 04:30:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.400 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.400 04:30:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.400 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.400 [2024-12-13 04:30:53.152558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.401 [2024-12-13 04:30:53.152651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.401 [2024-12-13 04:30:53.164607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.401 [2024-12-13 04:30:53.166718] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.401 [2024-12-13 04:30:53.166759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.401 [2024-12-13 04:30:53.166768] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:53.401 [2024-12-13 04:30:53.166777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.401 [2024-12-13 04:30:53.166783] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:53.401 [2024-12-13 04:30:53.166791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.401 04:30:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.401 "name": "Existed_Raid", 00:14:53.401 "uuid": "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:53.401 "strip_size_kb": 64, 00:14:53.401 "state": "configuring", 00:14:53.401 "raid_level": "raid5f", 00:14:53.401 "superblock": true, 00:14:53.401 "num_base_bdevs": 4, 00:14:53.401 "num_base_bdevs_discovered": 1, 00:14:53.401 "num_base_bdevs_operational": 4, 00:14:53.401 "base_bdevs_list": [ 00:14:53.401 { 00:14:53.401 "name": "BaseBdev1", 00:14:53.401 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:53.401 "is_configured": true, 00:14:53.401 "data_offset": 2048, 00:14:53.401 "data_size": 63488 00:14:53.401 }, 00:14:53.401 { 00:14:53.401 "name": "BaseBdev2", 00:14:53.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.401 "is_configured": false, 00:14:53.401 "data_offset": 0, 00:14:53.401 "data_size": 0 00:14:53.401 }, 00:14:53.401 { 00:14:53.401 "name": "BaseBdev3", 00:14:53.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.401 "is_configured": false, 00:14:53.401 "data_offset": 0, 00:14:53.401 "data_size": 0 00:14:53.401 }, 00:14:53.401 { 00:14:53.401 "name": "BaseBdev4", 00:14:53.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.401 "is_configured": false, 00:14:53.401 "data_offset": 0, 00:14:53.401 "data_size": 0 00:14:53.401 } 00:14:53.401 ] 00:14:53.401 }' 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.401 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 [2024-12-13 04:30:53.593490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.661 BaseBdev2 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 [ 00:14:53.661 { 00:14:53.661 "name": "BaseBdev2", 00:14:53.661 "aliases": [ 00:14:53.661 
"6dc58883-3a87-48b5-a04e-9d91687d0e85" 00:14:53.661 ], 00:14:53.661 "product_name": "Malloc disk", 00:14:53.661 "block_size": 512, 00:14:53.661 "num_blocks": 65536, 00:14:53.661 "uuid": "6dc58883-3a87-48b5-a04e-9d91687d0e85", 00:14:53.661 "assigned_rate_limits": { 00:14:53.661 "rw_ios_per_sec": 0, 00:14:53.661 "rw_mbytes_per_sec": 0, 00:14:53.661 "r_mbytes_per_sec": 0, 00:14:53.661 "w_mbytes_per_sec": 0 00:14:53.661 }, 00:14:53.661 "claimed": true, 00:14:53.661 "claim_type": "exclusive_write", 00:14:53.661 "zoned": false, 00:14:53.661 "supported_io_types": { 00:14:53.661 "read": true, 00:14:53.661 "write": true, 00:14:53.661 "unmap": true, 00:14:53.661 "flush": true, 00:14:53.661 "reset": true, 00:14:53.661 "nvme_admin": false, 00:14:53.661 "nvme_io": false, 00:14:53.661 "nvme_io_md": false, 00:14:53.661 "write_zeroes": true, 00:14:53.661 "zcopy": true, 00:14:53.661 "get_zone_info": false, 00:14:53.661 "zone_management": false, 00:14:53.661 "zone_append": false, 00:14:53.661 "compare": false, 00:14:53.661 "compare_and_write": false, 00:14:53.661 "abort": true, 00:14:53.661 "seek_hole": false, 00:14:53.661 "seek_data": false, 00:14:53.661 "copy": true, 00:14:53.661 "nvme_iov_md": false 00:14:53.661 }, 00:14:53.661 "memory_domains": [ 00:14:53.661 { 00:14:53.661 "dma_device_id": "system", 00:14:53.661 "dma_device_type": 1 00:14:53.661 }, 00:14:53.661 { 00:14:53.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.661 "dma_device_type": 2 00:14:53.661 } 00:14:53.661 ], 00:14:53.661 "driver_specific": {} 00:14:53.661 } 00:14:53.661 ] 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.661 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.922 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.922 "name": "Existed_Raid", 00:14:53.922 "uuid": 
"ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:53.922 "strip_size_kb": 64, 00:14:53.922 "state": "configuring", 00:14:53.922 "raid_level": "raid5f", 00:14:53.922 "superblock": true, 00:14:53.922 "num_base_bdevs": 4, 00:14:53.922 "num_base_bdevs_discovered": 2, 00:14:53.922 "num_base_bdevs_operational": 4, 00:14:53.922 "base_bdevs_list": [ 00:14:53.922 { 00:14:53.922 "name": "BaseBdev1", 00:14:53.922 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:53.922 "is_configured": true, 00:14:53.922 "data_offset": 2048, 00:14:53.922 "data_size": 63488 00:14:53.922 }, 00:14:53.922 { 00:14:53.922 "name": "BaseBdev2", 00:14:53.922 "uuid": "6dc58883-3a87-48b5-a04e-9d91687d0e85", 00:14:53.922 "is_configured": true, 00:14:53.922 "data_offset": 2048, 00:14:53.922 "data_size": 63488 00:14:53.922 }, 00:14:53.922 { 00:14:53.922 "name": "BaseBdev3", 00:14:53.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.922 "is_configured": false, 00:14:53.922 "data_offset": 0, 00:14:53.922 "data_size": 0 00:14:53.922 }, 00:14:53.922 { 00:14:53.922 "name": "BaseBdev4", 00:14:53.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.922 "is_configured": false, 00:14:53.922 "data_offset": 0, 00:14:53.922 "data_size": 0 00:14:53.922 } 00:14:53.922 ] 00:14:53.922 }' 00:14:53.922 04:30:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.922 04:30:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 [2024-12-13 04:30:54.105036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.182 BaseBdev3 
00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 [ 00:14:54.182 { 00:14:54.182 "name": "BaseBdev3", 00:14:54.182 "aliases": [ 00:14:54.182 "9d1d7744-262a-443e-8a8a-2fe02ca3321c" 00:14:54.182 ], 00:14:54.182 "product_name": "Malloc disk", 00:14:54.182 "block_size": 512, 00:14:54.182 "num_blocks": 65536, 00:14:54.182 "uuid": "9d1d7744-262a-443e-8a8a-2fe02ca3321c", 00:14:54.182 
"assigned_rate_limits": { 00:14:54.182 "rw_ios_per_sec": 0, 00:14:54.182 "rw_mbytes_per_sec": 0, 00:14:54.182 "r_mbytes_per_sec": 0, 00:14:54.182 "w_mbytes_per_sec": 0 00:14:54.182 }, 00:14:54.182 "claimed": true, 00:14:54.182 "claim_type": "exclusive_write", 00:14:54.182 "zoned": false, 00:14:54.182 "supported_io_types": { 00:14:54.182 "read": true, 00:14:54.182 "write": true, 00:14:54.182 "unmap": true, 00:14:54.182 "flush": true, 00:14:54.182 "reset": true, 00:14:54.182 "nvme_admin": false, 00:14:54.182 "nvme_io": false, 00:14:54.182 "nvme_io_md": false, 00:14:54.182 "write_zeroes": true, 00:14:54.182 "zcopy": true, 00:14:54.182 "get_zone_info": false, 00:14:54.182 "zone_management": false, 00:14:54.182 "zone_append": false, 00:14:54.182 "compare": false, 00:14:54.182 "compare_and_write": false, 00:14:54.182 "abort": true, 00:14:54.182 "seek_hole": false, 00:14:54.182 "seek_data": false, 00:14:54.182 "copy": true, 00:14:54.182 "nvme_iov_md": false 00:14:54.182 }, 00:14:54.182 "memory_domains": [ 00:14:54.182 { 00:14:54.182 "dma_device_id": "system", 00:14:54.182 "dma_device_type": 1 00:14:54.182 }, 00:14:54.182 { 00:14:54.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.182 "dma_device_type": 2 00:14:54.182 } 00:14:54.182 ], 00:14:54.182 "driver_specific": {} 00:14:54.182 } 00:14:54.182 ] 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.182 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.183 "name": "Existed_Raid", 00:14:54.183 "uuid": "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:54.183 "strip_size_kb": 64, 00:14:54.183 "state": "configuring", 00:14:54.183 "raid_level": "raid5f", 00:14:54.183 "superblock": true, 00:14:54.183 "num_base_bdevs": 4, 00:14:54.183 "num_base_bdevs_discovered": 3, 
00:14:54.183 "num_base_bdevs_operational": 4, 00:14:54.183 "base_bdevs_list": [ 00:14:54.183 { 00:14:54.183 "name": "BaseBdev1", 00:14:54.183 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:54.183 "is_configured": true, 00:14:54.183 "data_offset": 2048, 00:14:54.183 "data_size": 63488 00:14:54.183 }, 00:14:54.183 { 00:14:54.183 "name": "BaseBdev2", 00:14:54.183 "uuid": "6dc58883-3a87-48b5-a04e-9d91687d0e85", 00:14:54.183 "is_configured": true, 00:14:54.183 "data_offset": 2048, 00:14:54.183 "data_size": 63488 00:14:54.183 }, 00:14:54.183 { 00:14:54.183 "name": "BaseBdev3", 00:14:54.183 "uuid": "9d1d7744-262a-443e-8a8a-2fe02ca3321c", 00:14:54.183 "is_configured": true, 00:14:54.183 "data_offset": 2048, 00:14:54.183 "data_size": 63488 00:14:54.183 }, 00:14:54.183 { 00:14:54.183 "name": "BaseBdev4", 00:14:54.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.183 "is_configured": false, 00:14:54.183 "data_offset": 0, 00:14:54.183 "data_size": 0 00:14:54.183 } 00:14:54.183 ] 00:14:54.183 }' 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.183 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.753 [2024-12-13 04:30:54.593102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.753 [2024-12-13 04:30:54.593420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:54.753 [2024-12-13 04:30:54.593440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:54.753 [2024-12-13 
04:30:54.593825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:54.753 BaseBdev4 00:14:54.753 [2024-12-13 04:30:54.594352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:54.753 [2024-12-13 04:30:54.594367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:54.753 [2024-12-13 04:30:54.594520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.753 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:54.754 04:30:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.754 [ 00:14:54.754 { 00:14:54.754 "name": "BaseBdev4", 00:14:54.754 "aliases": [ 00:14:54.754 "9bbf4e52-9286-40d1-bb19-69d6c4651f57" 00:14:54.754 ], 00:14:54.754 "product_name": "Malloc disk", 00:14:54.754 "block_size": 512, 00:14:54.754 "num_blocks": 65536, 00:14:54.754 "uuid": "9bbf4e52-9286-40d1-bb19-69d6c4651f57", 00:14:54.754 "assigned_rate_limits": { 00:14:54.754 "rw_ios_per_sec": 0, 00:14:54.754 "rw_mbytes_per_sec": 0, 00:14:54.754 "r_mbytes_per_sec": 0, 00:14:54.754 "w_mbytes_per_sec": 0 00:14:54.754 }, 00:14:54.754 "claimed": true, 00:14:54.754 "claim_type": "exclusive_write", 00:14:54.754 "zoned": false, 00:14:54.754 "supported_io_types": { 00:14:54.754 "read": true, 00:14:54.754 "write": true, 00:14:54.754 "unmap": true, 00:14:54.754 "flush": true, 00:14:54.754 "reset": true, 00:14:54.754 "nvme_admin": false, 00:14:54.754 "nvme_io": false, 00:14:54.754 "nvme_io_md": false, 00:14:54.754 "write_zeroes": true, 00:14:54.754 "zcopy": true, 00:14:54.754 "get_zone_info": false, 00:14:54.754 "zone_management": false, 00:14:54.754 "zone_append": false, 00:14:54.754 "compare": false, 00:14:54.754 "compare_and_write": false, 00:14:54.754 "abort": true, 00:14:54.754 "seek_hole": false, 00:14:54.754 "seek_data": false, 00:14:54.754 "copy": true, 00:14:54.754 "nvme_iov_md": false 00:14:54.754 }, 00:14:54.754 "memory_domains": [ 00:14:54.754 { 00:14:54.754 "dma_device_id": "system", 00:14:54.754 "dma_device_type": 1 00:14:54.754 }, 00:14:54.754 { 00:14:54.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.754 "dma_device_type": 2 00:14:54.754 } 00:14:54.754 ], 00:14:54.754 "driver_specific": {} 00:14:54.754 } 00:14:54.754 ] 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.754 04:30:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
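The `verify_raid_bdev_properties` steps that follow compare the geometry string `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` of the raid bdev against each base bdev (the `cmp_raid_bdev='512 '` lines in the trace). A self-contained sketch of that comparison, with hand-made bdev records standing in for `bdev_get_bdevs` output (field values are illustrative; jq's `join` renders the null fields as empty strings, which is why the compared string carries trailing spaces):

```shell
# Fabricated bdev records -- NOT from a live target. A malloc base bdev with
# no metadata/DIF reports null for md_size, md_interleave and dif_type.
raid_json='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
base_json='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'

# Same projection the harness applies to both sides of the comparison.
fields='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
cmp_raid_bdev=$(printf '%s' "$raid_json" | jq -r "$fields")
cmp_base_bdev=$(printf '%s' "$base_json" | jq -r "$fields")

# The harness asserts the two strings match exactly, trailing spaces included.
if [ "$cmp_raid_bdev" = "$cmp_base_bdev" ]; then
    echo "geometry match"
fi
```

A mismatch in any of the four fields would make the joined strings differ and fail the harness's `[[ ... == ... ]]` check.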
00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.754 "name": "Existed_Raid", 00:14:54.754 "uuid": "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:54.754 "strip_size_kb": 64, 00:14:54.754 "state": "online", 00:14:54.754 "raid_level": "raid5f", 00:14:54.754 "superblock": true, 00:14:54.754 "num_base_bdevs": 4, 00:14:54.754 "num_base_bdevs_discovered": 4, 00:14:54.754 "num_base_bdevs_operational": 4, 00:14:54.754 "base_bdevs_list": [ 00:14:54.754 { 00:14:54.754 "name": "BaseBdev1", 00:14:54.754 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:54.754 "is_configured": true, 00:14:54.754 "data_offset": 2048, 00:14:54.754 "data_size": 63488 00:14:54.754 }, 00:14:54.754 { 00:14:54.754 "name": "BaseBdev2", 00:14:54.754 "uuid": "6dc58883-3a87-48b5-a04e-9d91687d0e85", 00:14:54.754 "is_configured": true, 00:14:54.754 "data_offset": 2048, 00:14:54.754 "data_size": 63488 00:14:54.754 }, 00:14:54.754 { 00:14:54.754 "name": "BaseBdev3", 00:14:54.754 "uuid": "9d1d7744-262a-443e-8a8a-2fe02ca3321c", 00:14:54.754 "is_configured": true, 00:14:54.754 "data_offset": 2048, 00:14:54.754 "data_size": 63488 00:14:54.754 }, 00:14:54.754 { 00:14:54.754 "name": "BaseBdev4", 00:14:54.754 "uuid": "9bbf4e52-9286-40d1-bb19-69d6c4651f57", 00:14:54.754 "is_configured": true, 00:14:54.754 "data_offset": 2048, 00:14:54.754 "data_size": 63488 00:14:54.754 } 00:14:54.754 ] 00:14:54.754 }' 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.754 04:30:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.322 [2024-12-13 04:30:55.112705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.322 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.322 "name": "Existed_Raid", 00:14:55.322 "aliases": [ 00:14:55.322 "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc" 00:14:55.322 ], 00:14:55.322 "product_name": "Raid Volume", 00:14:55.322 "block_size": 512, 00:14:55.322 "num_blocks": 190464, 00:14:55.322 "uuid": "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:55.322 "assigned_rate_limits": { 00:14:55.322 "rw_ios_per_sec": 0, 00:14:55.322 "rw_mbytes_per_sec": 0, 00:14:55.322 "r_mbytes_per_sec": 0, 00:14:55.322 "w_mbytes_per_sec": 0 00:14:55.322 }, 00:14:55.322 "claimed": false, 00:14:55.322 "zoned": false, 00:14:55.322 "supported_io_types": { 00:14:55.322 "read": true, 00:14:55.322 "write": true, 00:14:55.322 "unmap": false, 00:14:55.322 "flush": false, 
00:14:55.322 "reset": true, 00:14:55.322 "nvme_admin": false, 00:14:55.322 "nvme_io": false, 00:14:55.322 "nvme_io_md": false, 00:14:55.322 "write_zeroes": true, 00:14:55.322 "zcopy": false, 00:14:55.322 "get_zone_info": false, 00:14:55.322 "zone_management": false, 00:14:55.322 "zone_append": false, 00:14:55.322 "compare": false, 00:14:55.322 "compare_and_write": false, 00:14:55.322 "abort": false, 00:14:55.322 "seek_hole": false, 00:14:55.322 "seek_data": false, 00:14:55.322 "copy": false, 00:14:55.322 "nvme_iov_md": false 00:14:55.322 }, 00:14:55.322 "driver_specific": { 00:14:55.322 "raid": { 00:14:55.322 "uuid": "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:55.322 "strip_size_kb": 64, 00:14:55.322 "state": "online", 00:14:55.322 "raid_level": "raid5f", 00:14:55.322 "superblock": true, 00:14:55.322 "num_base_bdevs": 4, 00:14:55.322 "num_base_bdevs_discovered": 4, 00:14:55.322 "num_base_bdevs_operational": 4, 00:14:55.322 "base_bdevs_list": [ 00:14:55.322 { 00:14:55.322 "name": "BaseBdev1", 00:14:55.323 "uuid": "f9b5ba5f-4ab1-4b4d-8c13-d657c525cd0e", 00:14:55.323 "is_configured": true, 00:14:55.323 "data_offset": 2048, 00:14:55.323 "data_size": 63488 00:14:55.323 }, 00:14:55.323 { 00:14:55.323 "name": "BaseBdev2", 00:14:55.323 "uuid": "6dc58883-3a87-48b5-a04e-9d91687d0e85", 00:14:55.323 "is_configured": true, 00:14:55.323 "data_offset": 2048, 00:14:55.323 "data_size": 63488 00:14:55.323 }, 00:14:55.323 { 00:14:55.323 "name": "BaseBdev3", 00:14:55.323 "uuid": "9d1d7744-262a-443e-8a8a-2fe02ca3321c", 00:14:55.323 "is_configured": true, 00:14:55.323 "data_offset": 2048, 00:14:55.323 "data_size": 63488 00:14:55.323 }, 00:14:55.323 { 00:14:55.323 "name": "BaseBdev4", 00:14:55.323 "uuid": "9bbf4e52-9286-40d1-bb19-69d6c4651f57", 00:14:55.323 "is_configured": true, 00:14:55.323 "data_offset": 2048, 00:14:55.323 "data_size": 63488 00:14:55.323 } 00:14:55.323 ] 00:14:55.323 } 00:14:55.323 } 00:14:55.323 }' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:55.323 BaseBdev2 00:14:55.323 BaseBdev3 00:14:55.323 BaseBdev4' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.323 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.582 04:30:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 [2024-12-13 04:30:55.420608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.582 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.582 "name": "Existed_Raid", 00:14:55.582 "uuid": "ad0ba7b9-6def-473b-b54a-a7fc821b9dbc", 00:14:55.582 "strip_size_kb": 64, 00:14:55.582 "state": "online", 00:14:55.582 "raid_level": "raid5f", 00:14:55.582 "superblock": true, 00:14:55.582 "num_base_bdevs": 4, 00:14:55.582 "num_base_bdevs_discovered": 3, 00:14:55.582 "num_base_bdevs_operational": 3, 00:14:55.582 "base_bdevs_list": [ 00:14:55.582 { 00:14:55.582 "name": 
null, 00:14:55.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.582 "is_configured": false, 00:14:55.582 "data_offset": 0, 00:14:55.582 "data_size": 63488 00:14:55.582 }, 00:14:55.582 { 00:14:55.582 "name": "BaseBdev2", 00:14:55.583 "uuid": "6dc58883-3a87-48b5-a04e-9d91687d0e85", 00:14:55.583 "is_configured": true, 00:14:55.583 "data_offset": 2048, 00:14:55.583 "data_size": 63488 00:14:55.583 }, 00:14:55.583 { 00:14:55.583 "name": "BaseBdev3", 00:14:55.583 "uuid": "9d1d7744-262a-443e-8a8a-2fe02ca3321c", 00:14:55.583 "is_configured": true, 00:14:55.583 "data_offset": 2048, 00:14:55.583 "data_size": 63488 00:14:55.583 }, 00:14:55.583 { 00:14:55.583 "name": "BaseBdev4", 00:14:55.583 "uuid": "9bbf4e52-9286-40d1-bb19-69d6c4651f57", 00:14:55.583 "is_configured": true, 00:14:55.583 "data_offset": 2048, 00:14:55.583 "data_size": 63488 00:14:55.583 } 00:14:55.583 ] 00:14:55.583 }' 00:14:55.583 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.583 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.152 [2024-12-13 04:30:55.952585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.152 [2024-12-13 04:30:55.952788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.152 [2024-12-13 04:30:55.972757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.152 04:30:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.152 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.152 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:56.152 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:56.152 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 [2024-12-13 04:30:56.028682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 [2024-12-13 
04:30:56.101205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:56.153 [2024-12-13 04:30:56.101312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.413 04:30:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.413 BaseBdev2 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.413 [ 00:14:56.413 { 00:14:56.413 "name": "BaseBdev2", 00:14:56.413 "aliases": [ 00:14:56.413 "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7" 00:14:56.413 ], 00:14:56.413 "product_name": "Malloc disk", 00:14:56.413 "block_size": 512, 00:14:56.413 
"num_blocks": 65536, 00:14:56.413 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:56.413 "assigned_rate_limits": { 00:14:56.413 "rw_ios_per_sec": 0, 00:14:56.413 "rw_mbytes_per_sec": 0, 00:14:56.413 "r_mbytes_per_sec": 0, 00:14:56.413 "w_mbytes_per_sec": 0 00:14:56.413 }, 00:14:56.413 "claimed": false, 00:14:56.413 "zoned": false, 00:14:56.413 "supported_io_types": { 00:14:56.413 "read": true, 00:14:56.413 "write": true, 00:14:56.413 "unmap": true, 00:14:56.413 "flush": true, 00:14:56.413 "reset": true, 00:14:56.413 "nvme_admin": false, 00:14:56.413 "nvme_io": false, 00:14:56.413 "nvme_io_md": false, 00:14:56.413 "write_zeroes": true, 00:14:56.413 "zcopy": true, 00:14:56.413 "get_zone_info": false, 00:14:56.413 "zone_management": false, 00:14:56.413 "zone_append": false, 00:14:56.413 "compare": false, 00:14:56.413 "compare_and_write": false, 00:14:56.413 "abort": true, 00:14:56.413 "seek_hole": false, 00:14:56.413 "seek_data": false, 00:14:56.413 "copy": true, 00:14:56.413 "nvme_iov_md": false 00:14:56.413 }, 00:14:56.413 "memory_domains": [ 00:14:56.413 { 00:14:56.413 "dma_device_id": "system", 00:14:56.413 "dma_device_type": 1 00:14:56.413 }, 00:14:56.413 { 00:14:56.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.413 "dma_device_type": 2 00:14:56.413 } 00:14:56.413 ], 00:14:56.413 "driver_specific": {} 00:14:56.413 } 00:14:56.413 ] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.413 04:30:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.413 BaseBdev3 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.413 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 [ 00:14:56.414 { 00:14:56.414 "name": "BaseBdev3", 00:14:56.414 "aliases": [ 00:14:56.414 
"4f3248a7-62c1-4444-ac2c-b8bfe6241865" 00:14:56.414 ], 00:14:56.414 "product_name": "Malloc disk", 00:14:56.414 "block_size": 512, 00:14:56.414 "num_blocks": 65536, 00:14:56.414 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:56.414 "assigned_rate_limits": { 00:14:56.414 "rw_ios_per_sec": 0, 00:14:56.414 "rw_mbytes_per_sec": 0, 00:14:56.414 "r_mbytes_per_sec": 0, 00:14:56.414 "w_mbytes_per_sec": 0 00:14:56.414 }, 00:14:56.414 "claimed": false, 00:14:56.414 "zoned": false, 00:14:56.414 "supported_io_types": { 00:14:56.414 "read": true, 00:14:56.414 "write": true, 00:14:56.414 "unmap": true, 00:14:56.414 "flush": true, 00:14:56.414 "reset": true, 00:14:56.414 "nvme_admin": false, 00:14:56.414 "nvme_io": false, 00:14:56.414 "nvme_io_md": false, 00:14:56.414 "write_zeroes": true, 00:14:56.414 "zcopy": true, 00:14:56.414 "get_zone_info": false, 00:14:56.414 "zone_management": false, 00:14:56.414 "zone_append": false, 00:14:56.414 "compare": false, 00:14:56.414 "compare_and_write": false, 00:14:56.414 "abort": true, 00:14:56.414 "seek_hole": false, 00:14:56.414 "seek_data": false, 00:14:56.414 "copy": true, 00:14:56.414 "nvme_iov_md": false 00:14:56.414 }, 00:14:56.414 "memory_domains": [ 00:14:56.414 { 00:14:56.414 "dma_device_id": "system", 00:14:56.414 "dma_device_type": 1 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.414 "dma_device_type": 2 00:14:56.414 } 00:14:56.414 ], 00:14:56.414 "driver_specific": {} 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.414 04:30:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 BaseBdev4 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:56.414 [ 00:14:56.414 { 00:14:56.414 "name": "BaseBdev4", 00:14:56.414 "aliases": [ 00:14:56.414 "329420eb-c36f-4f40-b4de-0cb3cff31054" 00:14:56.414 ], 00:14:56.414 "product_name": "Malloc disk", 00:14:56.414 "block_size": 512, 00:14:56.414 "num_blocks": 65536, 00:14:56.414 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:56.414 "assigned_rate_limits": { 00:14:56.414 "rw_ios_per_sec": 0, 00:14:56.414 "rw_mbytes_per_sec": 0, 00:14:56.414 "r_mbytes_per_sec": 0, 00:14:56.414 "w_mbytes_per_sec": 0 00:14:56.414 }, 00:14:56.414 "claimed": false, 00:14:56.414 "zoned": false, 00:14:56.414 "supported_io_types": { 00:14:56.414 "read": true, 00:14:56.414 "write": true, 00:14:56.414 "unmap": true, 00:14:56.414 "flush": true, 00:14:56.414 "reset": true, 00:14:56.414 "nvme_admin": false, 00:14:56.414 "nvme_io": false, 00:14:56.414 "nvme_io_md": false, 00:14:56.414 "write_zeroes": true, 00:14:56.414 "zcopy": true, 00:14:56.414 "get_zone_info": false, 00:14:56.414 "zone_management": false, 00:14:56.414 "zone_append": false, 00:14:56.414 "compare": false, 00:14:56.414 "compare_and_write": false, 00:14:56.414 "abort": true, 00:14:56.414 "seek_hole": false, 00:14:56.414 "seek_data": false, 00:14:56.414 "copy": true, 00:14:56.414 "nvme_iov_md": false 00:14:56.414 }, 00:14:56.414 "memory_domains": [ 00:14:56.414 { 00:14:56.414 "dma_device_id": "system", 00:14:56.414 "dma_device_type": 1 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.414 "dma_device_type": 2 00:14:56.414 } 00:14:56.414 ], 00:14:56.414 "driver_specific": {} 00:14:56.414 } 00:14:56.414 ] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.414 04:30:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 [2024-12-13 04:30:56.349616] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.414 [2024-12-13 04:30:56.349711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.414 [2024-12-13 04:30:56.349758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.414 [2024-12-13 04:30:56.351872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.414 [2024-12-13 04:30:56.351957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.414 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.414 "name": "Existed_Raid", 00:14:56.414 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:56.414 "strip_size_kb": 64, 00:14:56.414 "state": "configuring", 00:14:56.414 "raid_level": "raid5f", 00:14:56.414 "superblock": true, 00:14:56.414 "num_base_bdevs": 4, 00:14:56.414 "num_base_bdevs_discovered": 3, 00:14:56.414 "num_base_bdevs_operational": 4, 00:14:56.414 "base_bdevs_list": [ 00:14:56.414 { 00:14:56.414 "name": "BaseBdev1", 00:14:56.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.414 "is_configured": false, 00:14:56.414 "data_offset": 0, 00:14:56.414 "data_size": 0 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "name": "BaseBdev2", 00:14:56.414 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:56.414 "is_configured": true, 00:14:56.414 "data_offset": 2048, 00:14:56.414 
"data_size": 63488 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "name": "BaseBdev3", 00:14:56.414 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:56.414 "is_configured": true, 00:14:56.414 "data_offset": 2048, 00:14:56.414 "data_size": 63488 00:14:56.414 }, 00:14:56.414 { 00:14:56.414 "name": "BaseBdev4", 00:14:56.414 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:56.414 "is_configured": true, 00:14:56.414 "data_offset": 2048, 00:14:56.414 "data_size": 63488 00:14:56.415 } 00:14:56.415 ] 00:14:56.415 }' 00:14:56.415 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.415 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 [2024-12-13 04:30:56.776863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.984 04:30:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.984 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.984 "name": "Existed_Raid", 00:14:56.984 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:56.984 "strip_size_kb": 64, 00:14:56.984 "state": "configuring", 00:14:56.984 "raid_level": "raid5f", 00:14:56.984 "superblock": true, 00:14:56.985 "num_base_bdevs": 4, 00:14:56.985 "num_base_bdevs_discovered": 2, 00:14:56.985 "num_base_bdevs_operational": 4, 00:14:56.985 "base_bdevs_list": [ 00:14:56.985 { 00:14:56.985 "name": "BaseBdev1", 00:14:56.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.985 "is_configured": false, 00:14:56.985 "data_offset": 0, 00:14:56.985 "data_size": 0 00:14:56.985 }, 00:14:56.985 { 00:14:56.985 "name": null, 00:14:56.985 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:56.985 
"is_configured": false, 00:14:56.985 "data_offset": 0, 00:14:56.985 "data_size": 63488 00:14:56.985 }, 00:14:56.985 { 00:14:56.985 "name": "BaseBdev3", 00:14:56.985 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:56.985 "is_configured": true, 00:14:56.985 "data_offset": 2048, 00:14:56.985 "data_size": 63488 00:14:56.985 }, 00:14:56.985 { 00:14:56.985 "name": "BaseBdev4", 00:14:56.985 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:56.985 "is_configured": true, 00:14:56.985 "data_offset": 2048, 00:14:56.985 "data_size": 63488 00:14:56.985 } 00:14:56.985 ] 00:14:56.985 }' 00:14:56.985 04:30:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.985 04:30:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.245 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.245 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:57.245 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.245 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.505 [2024-12-13 04:30:57.313011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:57.505 BaseBdev1 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.505 [ 00:14:57.505 { 00:14:57.505 "name": "BaseBdev1", 00:14:57.505 "aliases": [ 00:14:57.505 "7d16a270-dbdd-4936-acc9-6f168d905c4e" 00:14:57.505 ], 00:14:57.505 "product_name": "Malloc disk", 00:14:57.505 "block_size": 512, 00:14:57.505 "num_blocks": 65536, 00:14:57.505 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 
00:14:57.505 "assigned_rate_limits": { 00:14:57.505 "rw_ios_per_sec": 0, 00:14:57.505 "rw_mbytes_per_sec": 0, 00:14:57.505 "r_mbytes_per_sec": 0, 00:14:57.505 "w_mbytes_per_sec": 0 00:14:57.505 }, 00:14:57.505 "claimed": true, 00:14:57.505 "claim_type": "exclusive_write", 00:14:57.505 "zoned": false, 00:14:57.505 "supported_io_types": { 00:14:57.505 "read": true, 00:14:57.505 "write": true, 00:14:57.505 "unmap": true, 00:14:57.505 "flush": true, 00:14:57.505 "reset": true, 00:14:57.505 "nvme_admin": false, 00:14:57.505 "nvme_io": false, 00:14:57.505 "nvme_io_md": false, 00:14:57.505 "write_zeroes": true, 00:14:57.505 "zcopy": true, 00:14:57.505 "get_zone_info": false, 00:14:57.505 "zone_management": false, 00:14:57.505 "zone_append": false, 00:14:57.505 "compare": false, 00:14:57.505 "compare_and_write": false, 00:14:57.505 "abort": true, 00:14:57.505 "seek_hole": false, 00:14:57.505 "seek_data": false, 00:14:57.505 "copy": true, 00:14:57.505 "nvme_iov_md": false 00:14:57.505 }, 00:14:57.505 "memory_domains": [ 00:14:57.505 { 00:14:57.505 "dma_device_id": "system", 00:14:57.505 "dma_device_type": 1 00:14:57.505 }, 00:14:57.505 { 00:14:57.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.505 "dma_device_type": 2 00:14:57.505 } 00:14:57.505 ], 00:14:57.505 "driver_specific": {} 00:14:57.505 } 00:14:57.505 ] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.505 04:30:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.505 "name": "Existed_Raid", 00:14:57.505 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:57.505 "strip_size_kb": 64, 00:14:57.505 "state": "configuring", 00:14:57.505 "raid_level": "raid5f", 00:14:57.505 "superblock": true, 00:14:57.505 "num_base_bdevs": 4, 00:14:57.505 "num_base_bdevs_discovered": 3, 00:14:57.505 "num_base_bdevs_operational": 4, 00:14:57.505 "base_bdevs_list": [ 00:14:57.505 { 00:14:57.505 "name": "BaseBdev1", 00:14:57.505 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 
00:14:57.505 "is_configured": true, 00:14:57.505 "data_offset": 2048, 00:14:57.505 "data_size": 63488 00:14:57.505 }, 00:14:57.505 { 00:14:57.505 "name": null, 00:14:57.505 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:57.505 "is_configured": false, 00:14:57.505 "data_offset": 0, 00:14:57.505 "data_size": 63488 00:14:57.505 }, 00:14:57.505 { 00:14:57.505 "name": "BaseBdev3", 00:14:57.505 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:57.505 "is_configured": true, 00:14:57.505 "data_offset": 2048, 00:14:57.505 "data_size": 63488 00:14:57.505 }, 00:14:57.505 { 00:14:57.505 "name": "BaseBdev4", 00:14:57.505 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:57.505 "is_configured": true, 00:14:57.505 "data_offset": 2048, 00:14:57.505 "data_size": 63488 00:14:57.505 } 00:14:57.505 ] 00:14:57.505 }' 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.505 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.766 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.766 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.766 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.766 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.025 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.025 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.026 [2024-12-13 04:30:57.824543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.026 "name": "Existed_Raid", 00:14:58.026 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:58.026 "strip_size_kb": 64, 00:14:58.026 "state": "configuring", 00:14:58.026 "raid_level": "raid5f", 00:14:58.026 "superblock": true, 00:14:58.026 "num_base_bdevs": 4, 00:14:58.026 "num_base_bdevs_discovered": 2, 00:14:58.026 "num_base_bdevs_operational": 4, 00:14:58.026 "base_bdevs_list": [ 00:14:58.026 { 00:14:58.026 "name": "BaseBdev1", 00:14:58.026 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:14:58.026 "is_configured": true, 00:14:58.026 "data_offset": 2048, 00:14:58.026 "data_size": 63488 00:14:58.026 }, 00:14:58.026 { 00:14:58.026 "name": null, 00:14:58.026 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:58.026 "is_configured": false, 00:14:58.026 "data_offset": 0, 00:14:58.026 "data_size": 63488 00:14:58.026 }, 00:14:58.026 { 00:14:58.026 "name": null, 00:14:58.026 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:58.026 "is_configured": false, 00:14:58.026 "data_offset": 0, 00:14:58.026 "data_size": 63488 00:14:58.026 }, 00:14:58.026 { 00:14:58.026 "name": "BaseBdev4", 00:14:58.026 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:58.026 "is_configured": true, 00:14:58.026 "data_offset": 2048, 00:14:58.026 "data_size": 63488 00:14:58.026 } 00:14:58.026 ] 00:14:58.026 }' 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.026 04:30:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.286 04:30:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.286 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.286 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 [2024-12-13 04:30:58.312554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.546 "name": "Existed_Raid", 00:14:58.546 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:58.546 "strip_size_kb": 64, 00:14:58.546 "state": "configuring", 00:14:58.546 "raid_level": "raid5f", 00:14:58.546 "superblock": true, 00:14:58.546 "num_base_bdevs": 4, 00:14:58.546 "num_base_bdevs_discovered": 3, 00:14:58.546 "num_base_bdevs_operational": 4, 00:14:58.546 "base_bdevs_list": [ 00:14:58.546 { 00:14:58.546 "name": "BaseBdev1", 00:14:58.546 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 2048, 00:14:58.546 "data_size": 63488 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": null, 00:14:58.546 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:58.546 "is_configured": false, 00:14:58.546 "data_offset": 0, 00:14:58.546 "data_size": 63488 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": "BaseBdev3", 00:14:58.546 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 
00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 2048, 00:14:58.546 "data_size": 63488 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": "BaseBdev4", 00:14:58.546 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 2048, 00:14:58.546 "data_size": 63488 00:14:58.546 } 00:14:58.546 ] 00:14:58.546 }' 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.546 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.806 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.806 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.806 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.806 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.066 [2024-12-13 04:30:58.860582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.066 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.066 "name": "Existed_Raid", 00:14:59.066 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:59.066 "strip_size_kb": 64, 00:14:59.066 "state": "configuring", 00:14:59.066 "raid_level": "raid5f", 
00:14:59.066 "superblock": true, 00:14:59.066 "num_base_bdevs": 4, 00:14:59.066 "num_base_bdevs_discovered": 2, 00:14:59.066 "num_base_bdevs_operational": 4, 00:14:59.066 "base_bdevs_list": [ 00:14:59.066 { 00:14:59.066 "name": null, 00:14:59.066 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:14:59.066 "is_configured": false, 00:14:59.066 "data_offset": 0, 00:14:59.066 "data_size": 63488 00:14:59.066 }, 00:14:59.066 { 00:14:59.066 "name": null, 00:14:59.066 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:59.066 "is_configured": false, 00:14:59.066 "data_offset": 0, 00:14:59.066 "data_size": 63488 00:14:59.066 }, 00:14:59.066 { 00:14:59.066 "name": "BaseBdev3", 00:14:59.066 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:59.066 "is_configured": true, 00:14:59.066 "data_offset": 2048, 00:14:59.066 "data_size": 63488 00:14:59.066 }, 00:14:59.066 { 00:14:59.066 "name": "BaseBdev4", 00:14:59.066 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:59.067 "is_configured": true, 00:14:59.067 "data_offset": 2048, 00:14:59.067 "data_size": 63488 00:14:59.067 } 00:14:59.067 ] 00:14:59.067 }' 00:14:59.067 04:30:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.067 04:30:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.326 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.326 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.326 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.326 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:59.326 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.586 04:30:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:59.586 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.587 [2024-12-13 04:30:59.371260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.587 "name": "Existed_Raid", 00:14:59.587 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:14:59.587 "strip_size_kb": 64, 00:14:59.587 "state": "configuring", 00:14:59.587 "raid_level": "raid5f", 00:14:59.587 "superblock": true, 00:14:59.587 "num_base_bdevs": 4, 00:14:59.587 "num_base_bdevs_discovered": 3, 00:14:59.587 "num_base_bdevs_operational": 4, 00:14:59.587 "base_bdevs_list": [ 00:14:59.587 { 00:14:59.587 "name": null, 00:14:59.587 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:14:59.587 "is_configured": false, 00:14:59.587 "data_offset": 0, 00:14:59.587 "data_size": 63488 00:14:59.587 }, 00:14:59.587 { 00:14:59.587 "name": "BaseBdev2", 00:14:59.587 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:14:59.587 "is_configured": true, 00:14:59.587 "data_offset": 2048, 00:14:59.587 "data_size": 63488 00:14:59.587 }, 00:14:59.587 { 00:14:59.587 "name": "BaseBdev3", 00:14:59.587 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:14:59.587 "is_configured": true, 00:14:59.587 "data_offset": 2048, 00:14:59.587 "data_size": 63488 00:14:59.587 }, 00:14:59.587 { 00:14:59.587 "name": "BaseBdev4", 00:14:59.587 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:14:59.587 "is_configured": true, 00:14:59.587 "data_offset": 2048, 00:14:59.587 "data_size": 63488 00:14:59.587 } 00:14:59.587 ] 00:14:59.587 }' 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:59.587 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.847 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.847 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:59.847 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.847 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7d16a270-dbdd-4936-acc9-6f168d905c4e 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 [2024-12-13 04:30:59.972025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:00.107 [2024-12-13 04:30:59.972207] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:00.107 [2024-12-13 04:30:59.972220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:00.107 [2024-12-13 04:30:59.972543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:00.107 NewBaseBdev 00:15:00.107 [2024-12-13 04:30:59.973071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:00.107 [2024-12-13 04:30:59.973087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:15:00.107 [2024-12-13 04:30:59.973195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.107 04:30:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 [ 00:15:00.107 { 00:15:00.107 "name": "NewBaseBdev", 00:15:00.107 "aliases": [ 00:15:00.107 "7d16a270-dbdd-4936-acc9-6f168d905c4e" 00:15:00.107 ], 00:15:00.107 "product_name": "Malloc disk", 00:15:00.107 "block_size": 512, 00:15:00.107 "num_blocks": 65536, 00:15:00.107 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:15:00.107 "assigned_rate_limits": { 00:15:00.107 "rw_ios_per_sec": 0, 00:15:00.107 "rw_mbytes_per_sec": 0, 00:15:00.107 "r_mbytes_per_sec": 0, 00:15:00.107 "w_mbytes_per_sec": 0 00:15:00.107 }, 00:15:00.107 "claimed": true, 00:15:00.107 "claim_type": "exclusive_write", 00:15:00.107 "zoned": false, 00:15:00.107 "supported_io_types": { 00:15:00.107 "read": true, 00:15:00.107 "write": true, 00:15:00.107 "unmap": true, 00:15:00.107 "flush": true, 00:15:00.107 "reset": true, 00:15:00.107 "nvme_admin": false, 00:15:00.107 "nvme_io": false, 00:15:00.107 "nvme_io_md": false, 00:15:00.107 "write_zeroes": true, 00:15:00.107 "zcopy": true, 00:15:00.107 "get_zone_info": false, 00:15:00.107 "zone_management": false, 00:15:00.107 "zone_append": false, 00:15:00.107 "compare": false, 00:15:00.107 "compare_and_write": false, 00:15:00.107 "abort": true, 00:15:00.107 "seek_hole": false, 00:15:00.107 "seek_data": false, 00:15:00.107 "copy": true, 00:15:00.107 "nvme_iov_md": false 00:15:00.107 }, 00:15:00.107 "memory_domains": [ 00:15:00.107 { 00:15:00.107 "dma_device_id": "system", 00:15:00.107 "dma_device_type": 1 00:15:00.107 }, 00:15:00.107 { 00:15:00.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.107 "dma_device_type": 2 00:15:00.107 } 
00:15:00.107 ], 00:15:00.107 "driver_specific": {} 00:15:00.107 } 00:15:00.107 ] 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.107 
04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.107 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.107 "name": "Existed_Raid", 00:15:00.107 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:15:00.107 "strip_size_kb": 64, 00:15:00.107 "state": "online", 00:15:00.107 "raid_level": "raid5f", 00:15:00.107 "superblock": true, 00:15:00.107 "num_base_bdevs": 4, 00:15:00.107 "num_base_bdevs_discovered": 4, 00:15:00.107 "num_base_bdevs_operational": 4, 00:15:00.107 "base_bdevs_list": [ 00:15:00.107 { 00:15:00.107 "name": "NewBaseBdev", 00:15:00.107 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:15:00.108 "is_configured": true, 00:15:00.108 "data_offset": 2048, 00:15:00.108 "data_size": 63488 00:15:00.108 }, 00:15:00.108 { 00:15:00.108 "name": "BaseBdev2", 00:15:00.108 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:15:00.108 "is_configured": true, 00:15:00.108 "data_offset": 2048, 00:15:00.108 "data_size": 63488 00:15:00.108 }, 00:15:00.108 { 00:15:00.108 "name": "BaseBdev3", 00:15:00.108 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:15:00.108 "is_configured": true, 00:15:00.108 "data_offset": 2048, 00:15:00.108 "data_size": 63488 00:15:00.108 }, 00:15:00.108 { 00:15:00.108 "name": "BaseBdev4", 00:15:00.108 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:15:00.108 "is_configured": true, 00:15:00.108 "data_offset": 2048, 00:15:00.108 "data_size": 63488 00:15:00.108 } 00:15:00.108 ] 00:15:00.108 }' 00:15:00.108 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.108 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.676 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.677 [2024-12-13 04:31:00.419473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.677 "name": "Existed_Raid", 00:15:00.677 "aliases": [ 00:15:00.677 "e9c24eb4-7770-4974-bf5d-3eb74fadba90" 00:15:00.677 ], 00:15:00.677 "product_name": "Raid Volume", 00:15:00.677 "block_size": 512, 00:15:00.677 "num_blocks": 190464, 00:15:00.677 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:15:00.677 "assigned_rate_limits": { 00:15:00.677 "rw_ios_per_sec": 0, 00:15:00.677 "rw_mbytes_per_sec": 0, 00:15:00.677 "r_mbytes_per_sec": 0, 00:15:00.677 "w_mbytes_per_sec": 0 00:15:00.677 }, 00:15:00.677 "claimed": false, 00:15:00.677 "zoned": false, 00:15:00.677 "supported_io_types": { 00:15:00.677 "read": true, 00:15:00.677 "write": true, 00:15:00.677 "unmap": false, 00:15:00.677 "flush": false, 
00:15:00.677 "reset": true, 00:15:00.677 "nvme_admin": false, 00:15:00.677 "nvme_io": false, 00:15:00.677 "nvme_io_md": false, 00:15:00.677 "write_zeroes": true, 00:15:00.677 "zcopy": false, 00:15:00.677 "get_zone_info": false, 00:15:00.677 "zone_management": false, 00:15:00.677 "zone_append": false, 00:15:00.677 "compare": false, 00:15:00.677 "compare_and_write": false, 00:15:00.677 "abort": false, 00:15:00.677 "seek_hole": false, 00:15:00.677 "seek_data": false, 00:15:00.677 "copy": false, 00:15:00.677 "nvme_iov_md": false 00:15:00.677 }, 00:15:00.677 "driver_specific": { 00:15:00.677 "raid": { 00:15:00.677 "uuid": "e9c24eb4-7770-4974-bf5d-3eb74fadba90", 00:15:00.677 "strip_size_kb": 64, 00:15:00.677 "state": "online", 00:15:00.677 "raid_level": "raid5f", 00:15:00.677 "superblock": true, 00:15:00.677 "num_base_bdevs": 4, 00:15:00.677 "num_base_bdevs_discovered": 4, 00:15:00.677 "num_base_bdevs_operational": 4, 00:15:00.677 "base_bdevs_list": [ 00:15:00.677 { 00:15:00.677 "name": "NewBaseBdev", 00:15:00.677 "uuid": "7d16a270-dbdd-4936-acc9-6f168d905c4e", 00:15:00.677 "is_configured": true, 00:15:00.677 "data_offset": 2048, 00:15:00.677 "data_size": 63488 00:15:00.677 }, 00:15:00.677 { 00:15:00.677 "name": "BaseBdev2", 00:15:00.677 "uuid": "b25fc5c9-1b40-46e7-90f1-013e5d9dc5f7", 00:15:00.677 "is_configured": true, 00:15:00.677 "data_offset": 2048, 00:15:00.677 "data_size": 63488 00:15:00.677 }, 00:15:00.677 { 00:15:00.677 "name": "BaseBdev3", 00:15:00.677 "uuid": "4f3248a7-62c1-4444-ac2c-b8bfe6241865", 00:15:00.677 "is_configured": true, 00:15:00.677 "data_offset": 2048, 00:15:00.677 "data_size": 63488 00:15:00.677 }, 00:15:00.677 { 00:15:00.677 "name": "BaseBdev4", 00:15:00.677 "uuid": "329420eb-c36f-4f40-b4de-0cb3cff31054", 00:15:00.677 "is_configured": true, 00:15:00.677 "data_offset": 2048, 00:15:00.677 "data_size": 63488 00:15:00.677 } 00:15:00.677 ] 00:15:00.677 } 00:15:00.677 } 00:15:00.677 }' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:00.677 BaseBdev2 00:15:00.677 BaseBdev3 00:15:00.677 BaseBdev4' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.677 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:00.677 04:31:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.937 [2024-12-13 04:31:00.722772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.937 [2024-12-13 04:31:00.722796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.937 [2024-12-13 04:31:00.722870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.937 [2024-12-13 04:31:00.723136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.937 [2024-12-13 04:31:00.723147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95649 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95649 ']' 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 95649 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95649 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.937 killing process with pid 95649 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95649' 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 95649 00:15:00.937 [2024-12-13 04:31:00.772998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.937 04:31:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 95649 00:15:00.937 [2024-12-13 04:31:00.851022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.197 04:31:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:01.197 00:15:01.197 real 0m9.934s 00:15:01.197 user 0m16.767s 00:15:01.197 sys 0m2.175s 00:15:01.197 04:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.197 ************************************ 00:15:01.197 END TEST raid5f_state_function_test_sb 00:15:01.197 ************************************ 00:15:01.197 04:31:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.457 04:31:01 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:01.457 04:31:01 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:01.457 04:31:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.457 04:31:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.457 ************************************ 00:15:01.457 START TEST raid5f_superblock_test 00:15:01.457 ************************************ 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=96298 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96298 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 96298 ']' 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.457 04:31:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.457 [2024-12-13 04:31:01.349536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:01.457 [2024-12-13 04:31:01.349782] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96298 ] 00:15:01.716 [2024-12-13 04:31:01.504365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.716 [2024-12-13 04:31:01.542066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.716 [2024-12-13 04:31:01.619688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.716 [2024-12-13 04:31:01.619816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.287 malloc1 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.287 [2024-12-13 04:31:02.198024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.287 [2024-12-13 04:31:02.198086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.287 [2024-12-13 04:31:02.198110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:02.287 [2024-12-13 04:31:02.198132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.287 [2024-12-13 04:31:02.200570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.287 [2024-12-13 04:31:02.200684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.287 pt1 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.287 malloc2 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.287 [2024-12-13 04:31:02.232742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.287 [2024-12-13 04:31:02.232853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.287 [2024-12-13 04:31:02.232890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:02.287 [2024-12-13 04:31:02.232924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.287 [2024-12-13 04:31:02.235278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.287 [2024-12-13 04:31:02.235345] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.287 pt2 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.287 malloc3 00:15:02.287 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.288 [2024-12-13 04:31:02.271372] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.288 [2024-12-13 04:31:02.271469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.288 [2024-12-13 04:31:02.271508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:02.288 [2024-12-13 04:31:02.271537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.288 [2024-12-13 04:31:02.273861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.288 [2024-12-13 04:31:02.273927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.288 pt3 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:02.288 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.288 04:31:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.548 malloc4 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.548 [2024-12-13 04:31:02.327056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:02.548 [2024-12-13 04:31:02.327189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.548 [2024-12-13 04:31:02.327241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:02.548 [2024-12-13 04:31:02.327301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.548 [2024-12-13 04:31:02.330790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.548 [2024-12-13 04:31:02.330896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:02.548 pt4 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.548 [2024-12-13 04:31:02.339122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.548 [2024-12-13 04:31:02.341422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.548 [2024-12-13 04:31:02.341513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.548 [2024-12-13 04:31:02.341590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:02.548 [2024-12-13 04:31:02.341787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:02.548 [2024-12-13 04:31:02.341803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:02.548 [2024-12-13 04:31:02.342098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:02.548 [2024-12-13 04:31:02.342725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:02.548 [2024-12-13 04:31:02.342780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:02.548 [2024-12-13 04:31:02.343046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.548 
04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.548 "name": "raid_bdev1", 00:15:02.548 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:02.548 "strip_size_kb": 64, 00:15:02.548 "state": "online", 00:15:02.548 "raid_level": "raid5f", 00:15:02.548 "superblock": true, 00:15:02.548 "num_base_bdevs": 4, 00:15:02.548 "num_base_bdevs_discovered": 4, 00:15:02.548 "num_base_bdevs_operational": 4, 00:15:02.548 "base_bdevs_list": [ 00:15:02.548 { 00:15:02.548 "name": "pt1", 00:15:02.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.548 "is_configured": true, 00:15:02.548 "data_offset": 2048, 00:15:02.548 "data_size": 63488 00:15:02.548 }, 00:15:02.548 { 00:15:02.548 "name": "pt2", 00:15:02.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.548 "is_configured": true, 00:15:02.548 "data_offset": 2048, 00:15:02.548 
"data_size": 63488 00:15:02.548 }, 00:15:02.548 { 00:15:02.548 "name": "pt3", 00:15:02.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.548 "is_configured": true, 00:15:02.548 "data_offset": 2048, 00:15:02.548 "data_size": 63488 00:15:02.548 }, 00:15:02.548 { 00:15:02.548 "name": "pt4", 00:15:02.548 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.548 "is_configured": true, 00:15:02.548 "data_offset": 2048, 00:15:02.548 "data_size": 63488 00:15:02.548 } 00:15:02.548 ] 00:15:02.548 }' 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.548 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.807 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.808 [2024-12-13 04:31:02.778568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.808 04:31:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.808 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.808 "name": "raid_bdev1", 00:15:02.808 "aliases": [ 00:15:02.808 "20b63048-33d7-4840-baa4-2aae51a7765c" 00:15:02.808 ], 00:15:02.808 "product_name": "Raid Volume", 00:15:02.808 "block_size": 512, 00:15:02.808 "num_blocks": 190464, 00:15:02.808 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:02.808 "assigned_rate_limits": { 00:15:02.808 "rw_ios_per_sec": 0, 00:15:02.808 "rw_mbytes_per_sec": 0, 00:15:02.808 "r_mbytes_per_sec": 0, 00:15:02.808 "w_mbytes_per_sec": 0 00:15:02.808 }, 00:15:02.808 "claimed": false, 00:15:02.808 "zoned": false, 00:15:02.808 "supported_io_types": { 00:15:02.808 "read": true, 00:15:02.808 "write": true, 00:15:02.808 "unmap": false, 00:15:02.808 "flush": false, 00:15:02.808 "reset": true, 00:15:02.808 "nvme_admin": false, 00:15:02.808 "nvme_io": false, 00:15:02.808 "nvme_io_md": false, 00:15:02.808 "write_zeroes": true, 00:15:02.808 "zcopy": false, 00:15:02.808 "get_zone_info": false, 00:15:02.808 "zone_management": false, 00:15:02.808 "zone_append": false, 00:15:02.808 "compare": false, 00:15:02.808 "compare_and_write": false, 00:15:02.808 "abort": false, 00:15:02.808 "seek_hole": false, 00:15:02.808 "seek_data": false, 00:15:02.808 "copy": false, 00:15:02.808 "nvme_iov_md": false 00:15:02.808 }, 00:15:02.808 "driver_specific": { 00:15:02.808 "raid": { 00:15:02.808 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:02.808 "strip_size_kb": 64, 00:15:02.808 "state": "online", 00:15:02.808 "raid_level": "raid5f", 00:15:02.808 "superblock": true, 00:15:02.808 "num_base_bdevs": 4, 00:15:02.808 "num_base_bdevs_discovered": 4, 00:15:02.808 "num_base_bdevs_operational": 4, 00:15:02.808 "base_bdevs_list": [ 00:15:02.808 { 00:15:02.808 "name": "pt1", 00:15:02.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.808 "is_configured": true, 00:15:02.808 "data_offset": 2048, 
00:15:02.808 "data_size": 63488 00:15:02.808 }, 00:15:02.808 { 00:15:02.808 "name": "pt2", 00:15:02.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.808 "is_configured": true, 00:15:02.808 "data_offset": 2048, 00:15:02.808 "data_size": 63488 00:15:02.808 }, 00:15:02.808 { 00:15:02.808 "name": "pt3", 00:15:02.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.808 "is_configured": true, 00:15:02.808 "data_offset": 2048, 00:15:02.808 "data_size": 63488 00:15:02.808 }, 00:15:02.808 { 00:15:02.808 "name": "pt4", 00:15:02.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.808 "is_configured": true, 00:15:02.808 "data_offset": 2048, 00:15:02.808 "data_size": 63488 00:15:02.808 } 00:15:02.808 ] 00:15:02.808 } 00:15:02.808 } 00:15:02.808 }' 00:15:02.808 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.066 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:03.067 pt2 00:15:03.067 pt3 00:15:03.067 pt4' 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.067 04:31:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.067 04:31:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.067 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 [2024-12-13 04:31:03.101959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=20b63048-33d7-4840-baa4-2aae51a7765c 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
20b63048-33d7-4840-baa4-2aae51a7765c ']' 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 [2024-12-13 04:31:03.145744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.327 [2024-12-13 04:31:03.145771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.327 [2024-12-13 04:31:03.145847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.327 [2024-12-13 04:31:03.145923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.327 [2024-12-13 04:31:03.145932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.327 
04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.327 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.328 04:31:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.328 [2024-12-13 04:31:03.317583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:03.328 [2024-12-13 04:31:03.319671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:03.328 [2024-12-13 04:31:03.319715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:03.328 [2024-12-13 04:31:03.319742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:03.328 [2024-12-13 04:31:03.319780] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:03.328 [2024-12-13 04:31:03.319826] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:03.328 [2024-12-13 04:31:03.319844] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:03.328 [2024-12-13 04:31:03.319859] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:03.328 [2024-12-13 04:31:03.319872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.328 [2024-12-13 04:31:03.319882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:03.328 request: 00:15:03.328 { 00:15:03.328 "name": "raid_bdev1", 00:15:03.328 "raid_level": "raid5f", 00:15:03.328 "base_bdevs": [ 00:15:03.328 "malloc1", 00:15:03.328 "malloc2", 00:15:03.328 "malloc3", 00:15:03.328 "malloc4" 00:15:03.328 ], 00:15:03.328 "strip_size_kb": 64, 00:15:03.328 "superblock": false, 00:15:03.328 "method": "bdev_raid_create", 00:15:03.328 "req_id": 1 00:15:03.328 } 00:15:03.328 Got JSON-RPC error response 
00:15:03.328 response: 00:15:03.328 { 00:15:03.328 "code": -17, 00:15:03.328 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:03.328 } 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.328 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.588 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.588 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:03.588 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:03.588 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.588 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.589 [2024-12-13 04:31:03.381451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.589 [2024-12-13 04:31:03.381538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:03.589 [2024-12-13 04:31:03.381579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:03.589 [2024-12-13 04:31:03.381607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.589 [2024-12-13 04:31:03.383975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.589 [2024-12-13 04:31:03.384041] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.589 [2024-12-13 04:31:03.384118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:03.589 [2024-12-13 04:31:03.384183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.589 pt1 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.589 "name": "raid_bdev1", 00:15:03.589 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:03.589 "strip_size_kb": 64, 00:15:03.589 "state": "configuring", 00:15:03.589 "raid_level": "raid5f", 00:15:03.589 "superblock": true, 00:15:03.589 "num_base_bdevs": 4, 00:15:03.589 "num_base_bdevs_discovered": 1, 00:15:03.589 "num_base_bdevs_operational": 4, 00:15:03.589 "base_bdevs_list": [ 00:15:03.589 { 00:15:03.589 "name": "pt1", 00:15:03.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.589 "is_configured": true, 00:15:03.589 "data_offset": 2048, 00:15:03.589 "data_size": 63488 00:15:03.589 }, 00:15:03.589 { 00:15:03.589 "name": null, 00:15:03.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.589 "is_configured": false, 00:15:03.589 "data_offset": 2048, 00:15:03.589 "data_size": 63488 00:15:03.589 }, 00:15:03.589 { 00:15:03.589 "name": null, 00:15:03.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.589 "is_configured": false, 00:15:03.589 "data_offset": 2048, 00:15:03.589 "data_size": 63488 00:15:03.589 }, 00:15:03.589 { 00:15:03.589 "name": null, 00:15:03.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.589 "is_configured": false, 00:15:03.589 "data_offset": 2048, 00:15:03.589 "data_size": 63488 00:15:03.589 } 00:15:03.589 ] 00:15:03.589 }' 
00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.589 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.849 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:03.849 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.849 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.849 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.849 [2024-12-13 04:31:03.844592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.849 [2024-12-13 04:31:03.844688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.850 [2024-12-13 04:31:03.844707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.850 [2024-12-13 04:31:03.844715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.850 [2024-12-13 04:31:03.845054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.850 [2024-12-13 04:31:03.845070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.850 [2024-12-13 04:31:03.845120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.850 [2024-12-13 04:31:03.845136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.850 pt2 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.850 [2024-12-13 04:31:03.856610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.850 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.110 "name": "raid_bdev1", 00:15:04.110 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:04.110 "strip_size_kb": 64, 00:15:04.110 "state": "configuring", 00:15:04.110 "raid_level": "raid5f", 00:15:04.110 "superblock": true, 00:15:04.110 "num_base_bdevs": 4, 00:15:04.110 "num_base_bdevs_discovered": 1, 00:15:04.110 "num_base_bdevs_operational": 4, 00:15:04.110 "base_bdevs_list": [ 00:15:04.110 { 00:15:04.110 "name": "pt1", 00:15:04.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.110 "is_configured": true, 00:15:04.110 "data_offset": 2048, 00:15:04.110 "data_size": 63488 00:15:04.110 }, 00:15:04.110 { 00:15:04.110 "name": null, 00:15:04.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.110 "is_configured": false, 00:15:04.110 "data_offset": 0, 00:15:04.110 "data_size": 63488 00:15:04.110 }, 00:15:04.110 { 00:15:04.110 "name": null, 00:15:04.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.110 "is_configured": false, 00:15:04.110 "data_offset": 2048, 00:15:04.110 "data_size": 63488 00:15:04.110 }, 00:15:04.110 { 00:15:04.110 "name": null, 00:15:04.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.110 "is_configured": false, 00:15:04.110 "data_offset": 2048, 00:15:04.110 "data_size": 63488 00:15:04.110 } 00:15:04.110 ] 00:15:04.110 }' 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.110 04:31:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.370 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:04.370 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.370 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:04.370 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.370 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.370 [2024-12-13 04:31:04.328549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:04.370 [2024-12-13 04:31:04.328656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.370 [2024-12-13 04:31:04.328684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:04.370 [2024-12-13 04:31:04.328712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.371 [2024-12-13 04:31:04.329051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.371 [2024-12-13 04:31:04.329111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:04.371 [2024-12-13 04:31:04.329186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:04.371 [2024-12-13 04:31:04.329240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.371 pt2 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.371 [2024-12-13 04:31:04.340542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:04.371 [2024-12-13 04:31:04.340635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.371 [2024-12-13 04:31:04.340673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:04.371 [2024-12-13 04:31:04.340700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.371 [2024-12-13 04:31:04.341063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.371 [2024-12-13 04:31:04.341120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:04.371 [2024-12-13 04:31:04.341191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:04.371 [2024-12-13 04:31:04.341238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.371 pt3 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.371 [2024-12-13 04:31:04.352542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:04.371 [2024-12-13 04:31:04.352589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.371 [2024-12-13 04:31:04.352600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:04.371 [2024-12-13 04:31:04.352609] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.371 [2024-12-13 04:31:04.352885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.371 [2024-12-13 04:31:04.352903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:04.371 [2024-12-13 04:31:04.352947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:04.371 [2024-12-13 04:31:04.352965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:04.371 [2024-12-13 04:31:04.353073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:04.371 [2024-12-13 04:31:04.353088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:04.371 [2024-12-13 04:31:04.353309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:04.371 [2024-12-13 04:31:04.353793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:04.371 [2024-12-13 04:31:04.353811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:04.371 [2024-12-13 04:31:04.353900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.371 pt4 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.371 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.631 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.631 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.631 "name": "raid_bdev1", 00:15:04.631 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:04.631 "strip_size_kb": 64, 00:15:04.631 "state": "online", 00:15:04.631 "raid_level": "raid5f", 00:15:04.631 "superblock": true, 00:15:04.631 "num_base_bdevs": 4, 00:15:04.631 "num_base_bdevs_discovered": 4, 00:15:04.631 "num_base_bdevs_operational": 4, 00:15:04.631 "base_bdevs_list": [ 00:15:04.631 { 00:15:04.631 "name": "pt1", 00:15:04.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.631 "is_configured": true, 00:15:04.631 
"data_offset": 2048, 00:15:04.631 "data_size": 63488 00:15:04.631 }, 00:15:04.631 { 00:15:04.631 "name": "pt2", 00:15:04.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.631 "is_configured": true, 00:15:04.631 "data_offset": 2048, 00:15:04.631 "data_size": 63488 00:15:04.631 }, 00:15:04.631 { 00:15:04.631 "name": "pt3", 00:15:04.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.631 "is_configured": true, 00:15:04.631 "data_offset": 2048, 00:15:04.631 "data_size": 63488 00:15:04.631 }, 00:15:04.631 { 00:15:04.631 "name": "pt4", 00:15:04.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.631 "is_configured": true, 00:15:04.632 "data_offset": 2048, 00:15:04.632 "data_size": 63488 00:15:04.632 } 00:15:04.632 ] 00:15:04.632 }' 00:15:04.632 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.632 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.892 04:31:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.892 [2024-12-13 04:31:04.848710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.892 "name": "raid_bdev1", 00:15:04.892 "aliases": [ 00:15:04.892 "20b63048-33d7-4840-baa4-2aae51a7765c" 00:15:04.892 ], 00:15:04.892 "product_name": "Raid Volume", 00:15:04.892 "block_size": 512, 00:15:04.892 "num_blocks": 190464, 00:15:04.892 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:04.892 "assigned_rate_limits": { 00:15:04.892 "rw_ios_per_sec": 0, 00:15:04.892 "rw_mbytes_per_sec": 0, 00:15:04.892 "r_mbytes_per_sec": 0, 00:15:04.892 "w_mbytes_per_sec": 0 00:15:04.892 }, 00:15:04.892 "claimed": false, 00:15:04.892 "zoned": false, 00:15:04.892 "supported_io_types": { 00:15:04.892 "read": true, 00:15:04.892 "write": true, 00:15:04.892 "unmap": false, 00:15:04.892 "flush": false, 00:15:04.892 "reset": true, 00:15:04.892 "nvme_admin": false, 00:15:04.892 "nvme_io": false, 00:15:04.892 "nvme_io_md": false, 00:15:04.892 "write_zeroes": true, 00:15:04.892 "zcopy": false, 00:15:04.892 "get_zone_info": false, 00:15:04.892 "zone_management": false, 00:15:04.892 "zone_append": false, 00:15:04.892 "compare": false, 00:15:04.892 "compare_and_write": false, 00:15:04.892 "abort": false, 00:15:04.892 "seek_hole": false, 00:15:04.892 "seek_data": false, 00:15:04.892 "copy": false, 00:15:04.892 "nvme_iov_md": false 00:15:04.892 }, 00:15:04.892 "driver_specific": { 00:15:04.892 "raid": { 00:15:04.892 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:04.892 "strip_size_kb": 64, 00:15:04.892 "state": "online", 00:15:04.892 "raid_level": "raid5f", 00:15:04.892 "superblock": true, 00:15:04.892 "num_base_bdevs": 4, 00:15:04.892 "num_base_bdevs_discovered": 4, 
00:15:04.892 "num_base_bdevs_operational": 4, 00:15:04.892 "base_bdevs_list": [ 00:15:04.892 { 00:15:04.892 "name": "pt1", 00:15:04.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.892 "is_configured": true, 00:15:04.892 "data_offset": 2048, 00:15:04.892 "data_size": 63488 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "name": "pt2", 00:15:04.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.892 "is_configured": true, 00:15:04.892 "data_offset": 2048, 00:15:04.892 "data_size": 63488 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "name": "pt3", 00:15:04.892 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.892 "is_configured": true, 00:15:04.892 "data_offset": 2048, 00:15:04.892 "data_size": 63488 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "name": "pt4", 00:15:04.892 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.892 "is_configured": true, 00:15:04.892 "data_offset": 2048, 00:15:04.892 "data_size": 63488 00:15:04.892 } 00:15:04.892 ] 00:15:04.892 } 00:15:04.892 } 00:15:04.892 }' 00:15:04.892 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:05.152 pt2 00:15:05.152 pt3 00:15:05.152 pt4' 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.152 04:31:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.152 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.153 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.153 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.412 [2024-12-13 04:31:05.172705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.412 04:31:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 20b63048-33d7-4840-baa4-2aae51a7765c '!=' 20b63048-33d7-4840-baa4-2aae51a7765c ']' 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.412 [2024-12-13 04:31:05.220583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.412 "name": "raid_bdev1", 00:15:05.412 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:05.412 "strip_size_kb": 64, 00:15:05.412 "state": "online", 00:15:05.412 "raid_level": "raid5f", 00:15:05.412 "superblock": true, 00:15:05.412 "num_base_bdevs": 4, 00:15:05.412 "num_base_bdevs_discovered": 3, 00:15:05.412 "num_base_bdevs_operational": 3, 00:15:05.412 "base_bdevs_list": [ 00:15:05.412 { 00:15:05.412 "name": null, 00:15:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.412 "is_configured": false, 00:15:05.412 "data_offset": 0, 00:15:05.412 "data_size": 63488 00:15:05.412 }, 00:15:05.412 { 00:15:05.412 "name": "pt2", 00:15:05.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.412 "is_configured": true, 00:15:05.412 "data_offset": 2048, 00:15:05.412 "data_size": 63488 00:15:05.412 }, 00:15:05.412 { 00:15:05.412 "name": "pt3", 00:15:05.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.412 "is_configured": true, 00:15:05.412 "data_offset": 2048, 00:15:05.412 "data_size": 63488 00:15:05.412 }, 00:15:05.412 { 00:15:05.412 "name": "pt4", 00:15:05.412 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.412 "is_configured": true, 00:15:05.412 
"data_offset": 2048, 00:15:05.412 "data_size": 63488 00:15:05.412 } 00:15:05.412 ] 00:15:05.412 }' 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.412 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.672 [2024-12-13 04:31:05.664544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.672 [2024-12-13 04:31:05.664604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.672 [2024-12-13 04:31:05.664675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.672 [2024-12-13 04:31:05.664744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.672 [2024-12-13 04:31:05.664837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:05.672 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.932 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.933 [2024-12-13 04:31:05.760550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.933 [2024-12-13 04:31:05.760590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.933 [2024-12-13 04:31:05.760602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:05.933 [2024-12-13 04:31:05.760612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.933 [2024-12-13 04:31:05.762959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.933 [2024-12-13 04:31:05.762994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.933 [2024-12-13 04:31:05.763044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.933 [2024-12-13 04:31:05.763071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.933 pt2 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.933 "name": "raid_bdev1", 00:15:05.933 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:05.933 "strip_size_kb": 64, 00:15:05.933 "state": "configuring", 00:15:05.933 "raid_level": "raid5f", 00:15:05.933 "superblock": true, 00:15:05.933 
"num_base_bdevs": 4, 00:15:05.933 "num_base_bdevs_discovered": 1, 00:15:05.933 "num_base_bdevs_operational": 3, 00:15:05.933 "base_bdevs_list": [ 00:15:05.933 { 00:15:05.933 "name": null, 00:15:05.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.933 "is_configured": false, 00:15:05.933 "data_offset": 2048, 00:15:05.933 "data_size": 63488 00:15:05.933 }, 00:15:05.933 { 00:15:05.933 "name": "pt2", 00:15:05.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.933 "is_configured": true, 00:15:05.933 "data_offset": 2048, 00:15:05.933 "data_size": 63488 00:15:05.933 }, 00:15:05.933 { 00:15:05.933 "name": null, 00:15:05.933 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.933 "is_configured": false, 00:15:05.933 "data_offset": 2048, 00:15:05.933 "data_size": 63488 00:15:05.933 }, 00:15:05.933 { 00:15:05.933 "name": null, 00:15:05.933 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.933 "is_configured": false, 00:15:05.933 "data_offset": 2048, 00:15:05.933 "data_size": 63488 00:15:05.933 } 00:15:05.933 ] 00:15:05.933 }' 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.933 04:31:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.504 [2024-12-13 04:31:06.220551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.504 [2024-12-13 
04:31:06.220651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.504 [2024-12-13 04:31:06.220679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:06.504 [2024-12-13 04:31:06.220708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.504 [2024-12-13 04:31:06.221044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.504 [2024-12-13 04:31:06.221102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.504 [2024-12-13 04:31:06.221183] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:06.504 [2024-12-13 04:31:06.221232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.504 pt3 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.504 "name": "raid_bdev1", 00:15:06.504 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:06.504 "strip_size_kb": 64, 00:15:06.504 "state": "configuring", 00:15:06.504 "raid_level": "raid5f", 00:15:06.504 "superblock": true, 00:15:06.504 "num_base_bdevs": 4, 00:15:06.504 "num_base_bdevs_discovered": 2, 00:15:06.504 "num_base_bdevs_operational": 3, 00:15:06.504 "base_bdevs_list": [ 00:15:06.504 { 00:15:06.504 "name": null, 00:15:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.504 "is_configured": false, 00:15:06.504 "data_offset": 2048, 00:15:06.504 "data_size": 63488 00:15:06.504 }, 00:15:06.504 { 00:15:06.504 "name": "pt2", 00:15:06.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.504 "is_configured": true, 00:15:06.504 "data_offset": 2048, 00:15:06.504 "data_size": 63488 00:15:06.504 }, 00:15:06.504 { 00:15:06.504 "name": "pt3", 00:15:06.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.504 "is_configured": true, 00:15:06.504 "data_offset": 2048, 00:15:06.504 "data_size": 63488 00:15:06.504 }, 00:15:06.504 { 00:15:06.504 "name": null, 00:15:06.504 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.504 "is_configured": false, 00:15:06.504 "data_offset": 2048, 
00:15:06.504 "data_size": 63488 00:15:06.504 } 00:15:06.504 ] 00:15:06.504 }' 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.504 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.764 [2024-12-13 04:31:06.668540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:06.764 [2024-12-13 04:31:06.668628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.764 [2024-12-13 04:31:06.668644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:06.764 [2024-12-13 04:31:06.668654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.764 [2024-12-13 04:31:06.668988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.764 [2024-12-13 04:31:06.669008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:06.764 [2024-12-13 04:31:06.669055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:06.764 [2024-12-13 04:31:06.669073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:06.764 [2024-12-13 04:31:06.669154] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:06.764 [2024-12-13 04:31:06.669168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:06.764 [2024-12-13 04:31:06.669403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:06.764 [2024-12-13 04:31:06.669975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:06.764 [2024-12-13 04:31:06.669994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:06.764 [2024-12-13 04:31:06.670197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.764 pt4 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.764 
04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.764 04:31:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.764 "name": "raid_bdev1", 00:15:06.764 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:06.764 "strip_size_kb": 64, 00:15:06.764 "state": "online", 00:15:06.764 "raid_level": "raid5f", 00:15:06.764 "superblock": true, 00:15:06.764 "num_base_bdevs": 4, 00:15:06.764 "num_base_bdevs_discovered": 3, 00:15:06.765 "num_base_bdevs_operational": 3, 00:15:06.765 "base_bdevs_list": [ 00:15:06.765 { 00:15:06.765 "name": null, 00:15:06.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.765 "is_configured": false, 00:15:06.765 "data_offset": 2048, 00:15:06.765 "data_size": 63488 00:15:06.765 }, 00:15:06.765 { 00:15:06.765 "name": "pt2", 00:15:06.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.765 "is_configured": true, 00:15:06.765 "data_offset": 2048, 00:15:06.765 "data_size": 63488 00:15:06.765 }, 00:15:06.765 { 00:15:06.765 "name": "pt3", 00:15:06.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.765 "is_configured": true, 00:15:06.765 "data_offset": 2048, 00:15:06.765 "data_size": 63488 00:15:06.765 }, 00:15:06.765 { 00:15:06.765 "name": "pt4", 00:15:06.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.765 "is_configured": true, 00:15:06.765 "data_offset": 2048, 00:15:06.765 "data_size": 63488 00:15:06.765 } 00:15:06.765 ] 00:15:06.765 }' 00:15:06.765 04:31:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.765 04:31:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.335 [2024-12-13 04:31:07.076520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.335 [2024-12-13 04:31:07.076586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.335 [2024-12-13 04:31:07.076666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.335 [2024-12-13 04:31:07.076741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.335 [2024-12-13 04:31:07.076824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.335 [2024-12-13 04:31:07.148576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.335 [2024-12-13 04:31:07.148665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.335 [2024-12-13 04:31:07.148700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:07.335 [2024-12-13 04:31:07.148726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.335 [2024-12-13 04:31:07.151133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.335 [2024-12-13 04:31:07.151196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.335 [2024-12-13 04:31:07.151288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.335 [2024-12-13 04:31:07.151333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.335 
[2024-12-13 04:31:07.151435] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:07.335 [2024-12-13 04:31:07.151525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.335 [2024-12-13 04:31:07.151569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:07.335 [2024-12-13 04:31:07.151650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.335 [2024-12-13 04:31:07.151802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.335 pt1 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.335 "name": "raid_bdev1", 00:15:07.335 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:07.335 "strip_size_kb": 64, 00:15:07.335 "state": "configuring", 00:15:07.335 "raid_level": "raid5f", 00:15:07.335 "superblock": true, 00:15:07.335 "num_base_bdevs": 4, 00:15:07.335 "num_base_bdevs_discovered": 2, 00:15:07.335 "num_base_bdevs_operational": 3, 00:15:07.335 "base_bdevs_list": [ 00:15:07.335 { 00:15:07.335 "name": null, 00:15:07.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.335 "is_configured": false, 00:15:07.335 "data_offset": 2048, 00:15:07.335 "data_size": 63488 00:15:07.335 }, 00:15:07.335 { 00:15:07.335 "name": "pt2", 00:15:07.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.335 "is_configured": true, 00:15:07.335 "data_offset": 2048, 00:15:07.335 "data_size": 63488 00:15:07.335 }, 00:15:07.335 { 00:15:07.335 "name": "pt3", 00:15:07.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.335 "is_configured": true, 00:15:07.335 "data_offset": 2048, 00:15:07.335 "data_size": 63488 00:15:07.335 }, 00:15:07.335 { 00:15:07.335 "name": null, 00:15:07.335 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.335 "is_configured": false, 00:15:07.335 "data_offset": 2048, 00:15:07.335 "data_size": 63488 00:15:07.335 } 00:15:07.335 ] 
00:15:07.335 }' 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.335 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.905 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.905 [2024-12-13 04:31:07.668544] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.905 [2024-12-13 04:31:07.668652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.905 [2024-12-13 04:31:07.668684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:07.905 [2024-12-13 04:31:07.668713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.905 [2024-12-13 04:31:07.669070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.905 [2024-12-13 04:31:07.669128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:07.905 [2024-12-13 04:31:07.669208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:07.905 [2024-12-13 04:31:07.669259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.905 [2024-12-13 04:31:07.669364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:07.905 [2024-12-13 04:31:07.669405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:07.905 [2024-12-13 04:31:07.669675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:07.905 [2024-12-13 04:31:07.670250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:07.905 [2024-12-13 04:31:07.670306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:07.905 [2024-12-13 04:31:07.670543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.905 pt4 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.906 04:31:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.906 "name": "raid_bdev1", 00:15:07.906 "uuid": "20b63048-33d7-4840-baa4-2aae51a7765c", 00:15:07.906 "strip_size_kb": 64, 00:15:07.906 "state": "online", 00:15:07.906 "raid_level": "raid5f", 00:15:07.906 "superblock": true, 00:15:07.906 "num_base_bdevs": 4, 00:15:07.906 "num_base_bdevs_discovered": 3, 00:15:07.906 "num_base_bdevs_operational": 3, 00:15:07.906 "base_bdevs_list": [ 00:15:07.906 { 00:15:07.906 "name": null, 00:15:07.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.906 "is_configured": false, 00:15:07.906 "data_offset": 2048, 00:15:07.906 "data_size": 63488 00:15:07.906 }, 00:15:07.906 { 00:15:07.906 "name": "pt2", 00:15:07.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.906 "is_configured": true, 00:15:07.906 "data_offset": 2048, 00:15:07.906 "data_size": 63488 00:15:07.906 }, 00:15:07.906 { 00:15:07.906 "name": "pt3", 00:15:07.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.906 "is_configured": true, 00:15:07.906 "data_offset": 2048, 00:15:07.906 "data_size": 63488 
00:15:07.906 }, 00:15:07.906 { 00:15:07.906 "name": "pt4", 00:15:07.906 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.906 "is_configured": true, 00:15:07.906 "data_offset": 2048, 00:15:07.906 "data_size": 63488 00:15:07.906 } 00:15:07.906 ] 00:15:07.906 }' 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.906 04:31:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.165 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:08.165 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:08.165 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.165 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.165 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.425 [2024-12-13 04:31:08.196720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 20b63048-33d7-4840-baa4-2aae51a7765c '!=' 20b63048-33d7-4840-baa4-2aae51a7765c ']' 00:15:08.425 04:31:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96298 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 96298 ']' 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 96298 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96298 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.425 killing process with pid 96298 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96298' 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 96298 00:15:08.425 [2024-12-13 04:31:08.270409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.425 [2024-12-13 04:31:08.270486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.425 [2024-12-13 04:31:08.270553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.425 [2024-12-13 04:31:08.270562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:08.425 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 96298 00:15:08.425 [2024-12-13 04:31:08.350751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.684 04:31:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:08.684 
00:15:08.684 real 0m7.421s 00:15:08.684 user 0m12.322s 00:15:08.684 sys 0m1.670s 00:15:08.684 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.684 ************************************ 00:15:08.684 END TEST raid5f_superblock_test 00:15:08.684 ************************************ 00:15:08.684 04:31:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.944 04:31:08 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:08.944 04:31:08 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:08.944 04:31:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:08.944 04:31:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.944 04:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.944 ************************************ 00:15:08.944 START TEST raid5f_rebuild_test 00:15:08.944 ************************************ 00:15:08.944 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:08.945 04:31:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96779 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96779 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 96779 ']' 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.945 04:31:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.945 [2024-12-13 04:31:08.865601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:08.945 [2024-12-13 04:31:08.865836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96779 ] 00:15:08.945 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:08.945 Zero copy mechanism will not be used. 00:15:09.204 [2024-12-13 04:31:09.020875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.205 [2024-12-13 04:31:09.059080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.205 [2024-12-13 04:31:09.136458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.205 [2024-12-13 04:31:09.136492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 BaseBdev1_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:09.774 [2024-12-13 04:31:09.706818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.774 [2024-12-13 04:31:09.706880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.774 [2024-12-13 04:31:09.706917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:09.774 [2024-12-13 04:31:09.706930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.774 [2024-12-13 04:31:09.709303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.774 [2024-12-13 04:31:09.709379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.774 BaseBdev1 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 BaseBdev2_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 [2024-12-13 04:31:09.741641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:09.774 [2024-12-13 04:31:09.741692] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.774 [2024-12-13 04:31:09.741715] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.774 [2024-12-13 04:31:09.741723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.774 [2024-12-13 04:31:09.743994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.774 [2024-12-13 04:31:09.744026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:09.774 BaseBdev2 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 BaseBdev3_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 [2024-12-13 04:31:09.776212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:09.774 [2024-12-13 04:31:09.776318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.774 [2024-12-13 04:31:09.776366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:09.774 
[2024-12-13 04:31:09.776404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.774 [2024-12-13 04:31:09.778869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.774 [2024-12-13 04:31:09.778946] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:09.774 BaseBdev3 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.034 BaseBdev4_malloc 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 [2024-12-13 04:31:09.820911] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:10.035 [2024-12-13 04:31:09.820998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.035 [2024-12-13 04:31:09.821040] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.035 [2024-12-13 04:31:09.821073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.035 [2024-12-13 04:31:09.823408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:10.035 [2024-12-13 04:31:09.823500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:10.035 BaseBdev4 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 spare_malloc 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 spare_delay 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 [2024-12-13 04:31:09.867434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.035 [2024-12-13 04:31:09.867485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.035 [2024-12-13 04:31:09.867504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:10.035 [2024-12-13 04:31:09.867512] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.035 [2024-12-13 04:31:09.869914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.035 [2024-12-13 04:31:09.869951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.035 spare 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 [2024-12-13 04:31:09.879513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.035 [2024-12-13 04:31:09.881643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.035 [2024-12-13 04:31:09.881762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.035 [2024-12-13 04:31:09.881843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.035 [2024-12-13 04:31:09.881934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:10.035 [2024-12-13 04:31:09.881943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:10.035 [2024-12-13 04:31:09.882198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:10.035 [2024-12-13 04:31:09.882703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:10.035 [2024-12-13 04:31:09.882719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:10.035 [2024-12-13 
04:31:09.882841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.035 "name": "raid_bdev1", 00:15:10.035 "uuid": 
"cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:10.035 "strip_size_kb": 64, 00:15:10.035 "state": "online", 00:15:10.035 "raid_level": "raid5f", 00:15:10.035 "superblock": false, 00:15:10.035 "num_base_bdevs": 4, 00:15:10.035 "num_base_bdevs_discovered": 4, 00:15:10.035 "num_base_bdevs_operational": 4, 00:15:10.035 "base_bdevs_list": [ 00:15:10.035 { 00:15:10.035 "name": "BaseBdev1", 00:15:10.035 "uuid": "9d3eb6ca-673b-5d1f-b6a6-f4c602fec9e8", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 0, 00:15:10.035 "data_size": 65536 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": "BaseBdev2", 00:15:10.035 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 0, 00:15:10.035 "data_size": 65536 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": "BaseBdev3", 00:15:10.035 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 0, 00:15:10.035 "data_size": 65536 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": "BaseBdev4", 00:15:10.035 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 0, 00:15:10.035 "data_size": 65536 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }' 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.035 04:31:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.605 [2024-12-13 04:31:10.329122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.605 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:10.605 [2024-12-13 04:31:10.604659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:10.876 /dev/nbd0 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.876 1+0 records in 00:15:10.876 1+0 records out 00:15:10.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378908 s, 10.8 MB/s 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.876 04:31:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:10.876 04:31:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:11.530 512+0 records in 00:15:11.530 512+0 records out 00:15:11.530 100663296 bytes (101 MB, 96 MiB) copied, 0.638544 s, 158 MB/s 00:15:11.530 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.531 [2024-12-13 04:31:11.536777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.531 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.790 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:11.790 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.790 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:11.790 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.790 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.790 [2024-12-13 04:31:11.556834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.791 "name": "raid_bdev1", 00:15:11.791 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:11.791 "strip_size_kb": 64, 00:15:11.791 "state": "online", 00:15:11.791 "raid_level": "raid5f", 00:15:11.791 "superblock": false, 00:15:11.791 "num_base_bdevs": 4, 00:15:11.791 "num_base_bdevs_discovered": 3, 00:15:11.791 "num_base_bdevs_operational": 3, 00:15:11.791 "base_bdevs_list": [ 00:15:11.791 { 00:15:11.791 "name": null, 00:15:11.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.791 "is_configured": false, 00:15:11.791 "data_offset": 0, 00:15:11.791 "data_size": 65536 00:15:11.791 }, 00:15:11.791 { 00:15:11.791 "name": "BaseBdev2", 00:15:11.791 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:11.791 "is_configured": true, 00:15:11.791 
"data_offset": 0, 00:15:11.791 "data_size": 65536 00:15:11.791 }, 00:15:11.791 { 00:15:11.791 "name": "BaseBdev3", 00:15:11.791 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:11.791 "is_configured": true, 00:15:11.791 "data_offset": 0, 00:15:11.791 "data_size": 65536 00:15:11.791 }, 00:15:11.791 { 00:15:11.791 "name": "BaseBdev4", 00:15:11.791 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:11.791 "is_configured": true, 00:15:11.791 "data_offset": 0, 00:15:11.791 "data_size": 65536 00:15:11.791 } 00:15:11.791 ] 00:15:11.791 }' 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.791 04:31:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.051 04:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.051 04:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.051 04:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.051 [2024-12-13 04:31:12.036576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.051 [2024-12-13 04:31:12.043884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:15:12.051 04:31:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.051 04:31:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:12.051 [2024-12-13 04:31:12.046540] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.433 "name": "raid_bdev1", 00:15:13.433 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:13.433 "strip_size_kb": 64, 00:15:13.433 "state": "online", 00:15:13.433 "raid_level": "raid5f", 00:15:13.433 "superblock": false, 00:15:13.433 "num_base_bdevs": 4, 00:15:13.433 "num_base_bdevs_discovered": 4, 00:15:13.433 "num_base_bdevs_operational": 4, 00:15:13.433 "process": { 00:15:13.433 "type": "rebuild", 00:15:13.433 "target": "spare", 00:15:13.433 "progress": { 00:15:13.433 "blocks": 19200, 00:15:13.433 "percent": 9 00:15:13.433 } 00:15:13.433 }, 00:15:13.433 "base_bdevs_list": [ 00:15:13.433 { 00:15:13.433 "name": "spare", 00:15:13.433 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:13.433 "is_configured": true, 00:15:13.433 "data_offset": 0, 00:15:13.433 "data_size": 65536 00:15:13.433 }, 00:15:13.433 { 00:15:13.433 "name": "BaseBdev2", 00:15:13.433 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:13.433 "is_configured": true, 00:15:13.433 "data_offset": 0, 00:15:13.433 "data_size": 65536 00:15:13.433 }, 00:15:13.433 { 00:15:13.433 "name": "BaseBdev3", 00:15:13.433 "uuid": 
"5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:13.433 "is_configured": true, 00:15:13.433 "data_offset": 0, 00:15:13.433 "data_size": 65536 00:15:13.433 }, 00:15:13.433 { 00:15:13.433 "name": "BaseBdev4", 00:15:13.433 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:13.433 "is_configured": true, 00:15:13.433 "data_offset": 0, 00:15:13.433 "data_size": 65536 00:15:13.433 } 00:15:13.433 ] 00:15:13.433 }' 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.433 [2024-12-13 04:31:13.161942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.433 [2024-12-13 04:31:13.253051] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.433 [2024-12-13 04:31:13.253106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.433 [2024-12-13 04:31:13.253128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.433 [2024-12-13 04:31:13.253135] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.433 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.434 "name": "raid_bdev1", 00:15:13.434 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:13.434 "strip_size_kb": 64, 00:15:13.434 "state": "online", 00:15:13.434 "raid_level": "raid5f", 00:15:13.434 "superblock": false, 00:15:13.434 "num_base_bdevs": 4, 00:15:13.434 "num_base_bdevs_discovered": 3, 00:15:13.434 
"num_base_bdevs_operational": 3, 00:15:13.434 "base_bdevs_list": [ 00:15:13.434 { 00:15:13.434 "name": null, 00:15:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.434 "is_configured": false, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 65536 00:15:13.434 }, 00:15:13.434 { 00:15:13.434 "name": "BaseBdev2", 00:15:13.434 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:13.434 "is_configured": true, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 65536 00:15:13.434 }, 00:15:13.434 { 00:15:13.434 "name": "BaseBdev3", 00:15:13.434 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:13.434 "is_configured": true, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 65536 00:15:13.434 }, 00:15:13.434 { 00:15:13.434 "name": "BaseBdev4", 00:15:13.434 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:13.434 "is_configured": true, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 65536 00:15:13.434 } 00:15:13.434 ] 00:15:13.434 }' 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.434 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.004 04:31:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.004 "name": "raid_bdev1", 00:15:14.004 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:14.004 "strip_size_kb": 64, 00:15:14.004 "state": "online", 00:15:14.004 "raid_level": "raid5f", 00:15:14.004 "superblock": false, 00:15:14.004 "num_base_bdevs": 4, 00:15:14.004 "num_base_bdevs_discovered": 3, 00:15:14.004 "num_base_bdevs_operational": 3, 00:15:14.004 "base_bdevs_list": [ 00:15:14.004 { 00:15:14.004 "name": null, 00:15:14.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.004 "is_configured": false, 00:15:14.004 "data_offset": 0, 00:15:14.004 "data_size": 65536 00:15:14.004 }, 00:15:14.004 { 00:15:14.004 "name": "BaseBdev2", 00:15:14.004 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:14.004 "is_configured": true, 00:15:14.004 "data_offset": 0, 00:15:14.004 "data_size": 65536 00:15:14.004 }, 00:15:14.004 { 00:15:14.004 "name": "BaseBdev3", 00:15:14.004 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:14.004 "is_configured": true, 00:15:14.004 "data_offset": 0, 00:15:14.004 "data_size": 65536 00:15:14.004 }, 00:15:14.004 { 00:15:14.004 "name": "BaseBdev4", 00:15:14.004 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:14.004 "is_configured": true, 00:15:14.004 "data_offset": 0, 00:15:14.004 "data_size": 65536 00:15:14.004 } 00:15:14.004 ] 00:15:14.004 }' 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.004 [2024-12-13 04:31:13.900952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.004 [2024-12-13 04:31:13.905971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.004 04:31:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:14.004 [2024-12-13 04:31:13.908538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.944 
04:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.944 04:31:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.204 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.204 "name": "raid_bdev1", 00:15:15.204 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:15.204 "strip_size_kb": 64, 00:15:15.204 "state": "online", 00:15:15.204 "raid_level": "raid5f", 00:15:15.204 "superblock": false, 00:15:15.204 "num_base_bdevs": 4, 00:15:15.204 "num_base_bdevs_discovered": 4, 00:15:15.204 "num_base_bdevs_operational": 4, 00:15:15.204 "process": { 00:15:15.204 "type": "rebuild", 00:15:15.204 "target": "spare", 00:15:15.204 "progress": { 00:15:15.204 "blocks": 19200, 00:15:15.204 "percent": 9 00:15:15.204 } 00:15:15.204 }, 00:15:15.204 "base_bdevs_list": [ 00:15:15.204 { 00:15:15.204 "name": "spare", 00:15:15.204 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 }, 00:15:15.204 { 00:15:15.204 "name": "BaseBdev2", 00:15:15.204 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 }, 00:15:15.204 { 00:15:15.204 "name": "BaseBdev3", 00:15:15.204 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 }, 00:15:15.204 { 00:15:15.204 "name": "BaseBdev4", 00:15:15.204 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 } 00:15:15.204 ] 00:15:15.204 }' 00:15:15.204 04:31:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:15.204 "name": "raid_bdev1", 00:15:15.204 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:15.204 "strip_size_kb": 64, 00:15:15.204 "state": "online", 00:15:15.204 "raid_level": "raid5f", 00:15:15.204 "superblock": false, 00:15:15.204 "num_base_bdevs": 4, 00:15:15.204 "num_base_bdevs_discovered": 4, 00:15:15.204 "num_base_bdevs_operational": 4, 00:15:15.204 "process": { 00:15:15.204 "type": "rebuild", 00:15:15.204 "target": "spare", 00:15:15.204 "progress": { 00:15:15.204 "blocks": 21120, 00:15:15.204 "percent": 10 00:15:15.204 } 00:15:15.204 }, 00:15:15.204 "base_bdevs_list": [ 00:15:15.204 { 00:15:15.204 "name": "spare", 00:15:15.204 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 }, 00:15:15.204 { 00:15:15.204 "name": "BaseBdev2", 00:15:15.204 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 }, 00:15:15.204 { 00:15:15.204 "name": "BaseBdev3", 00:15:15.204 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 }, 00:15:15.204 { 00:15:15.204 "name": "BaseBdev4", 00:15:15.204 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:15.204 "is_configured": true, 00:15:15.204 "data_offset": 0, 00:15:15.204 "data_size": 65536 00:15:15.204 } 00:15:15.204 ] 00:15:15.204 }' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.204 04:31:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.204 04:31:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.586 "name": "raid_bdev1", 00:15:16.586 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:16.586 "strip_size_kb": 64, 00:15:16.586 "state": "online", 00:15:16.586 "raid_level": "raid5f", 00:15:16.586 "superblock": false, 00:15:16.586 "num_base_bdevs": 4, 00:15:16.586 "num_base_bdevs_discovered": 4, 00:15:16.586 "num_base_bdevs_operational": 4, 00:15:16.586 "process": { 00:15:16.586 "type": "rebuild", 00:15:16.586 "target": "spare", 00:15:16.586 "progress": { 00:15:16.586 "blocks": 44160, 00:15:16.586 "percent": 22 00:15:16.586 } 00:15:16.586 }, 00:15:16.586 "base_bdevs_list": [ 00:15:16.586 { 
00:15:16.586 "name": "spare", 00:15:16.586 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:16.586 "is_configured": true, 00:15:16.586 "data_offset": 0, 00:15:16.586 "data_size": 65536 00:15:16.586 }, 00:15:16.586 { 00:15:16.586 "name": "BaseBdev2", 00:15:16.586 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:16.586 "is_configured": true, 00:15:16.586 "data_offset": 0, 00:15:16.586 "data_size": 65536 00:15:16.586 }, 00:15:16.586 { 00:15:16.586 "name": "BaseBdev3", 00:15:16.586 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:16.586 "is_configured": true, 00:15:16.586 "data_offset": 0, 00:15:16.586 "data_size": 65536 00:15:16.586 }, 00:15:16.586 { 00:15:16.586 "name": "BaseBdev4", 00:15:16.586 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:16.586 "is_configured": true, 00:15:16.586 "data_offset": 0, 00:15:16.586 "data_size": 65536 00:15:16.586 } 00:15:16.586 ] 00:15:16.586 }' 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.586 04:31:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.526 "name": "raid_bdev1", 00:15:17.526 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:17.526 "strip_size_kb": 64, 00:15:17.526 "state": "online", 00:15:17.526 "raid_level": "raid5f", 00:15:17.526 "superblock": false, 00:15:17.526 "num_base_bdevs": 4, 00:15:17.526 "num_base_bdevs_discovered": 4, 00:15:17.526 "num_base_bdevs_operational": 4, 00:15:17.526 "process": { 00:15:17.526 "type": "rebuild", 00:15:17.526 "target": "spare", 00:15:17.526 "progress": { 00:15:17.526 "blocks": 65280, 00:15:17.526 "percent": 33 00:15:17.526 } 00:15:17.526 }, 00:15:17.526 "base_bdevs_list": [ 00:15:17.526 { 00:15:17.526 "name": "spare", 00:15:17.526 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:17.526 "is_configured": true, 00:15:17.526 "data_offset": 0, 00:15:17.526 "data_size": 65536 00:15:17.526 }, 00:15:17.526 { 00:15:17.526 "name": "BaseBdev2", 00:15:17.526 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:17.526 "is_configured": true, 00:15:17.526 "data_offset": 0, 00:15:17.526 "data_size": 65536 00:15:17.526 }, 00:15:17.526 { 00:15:17.526 "name": "BaseBdev3", 00:15:17.526 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:17.526 "is_configured": true, 00:15:17.526 "data_offset": 0, 00:15:17.526 
"data_size": 65536 00:15:17.526 }, 00:15:17.526 { 00:15:17.526 "name": "BaseBdev4", 00:15:17.526 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:17.526 "is_configured": true, 00:15:17.526 "data_offset": 0, 00:15:17.526 "data_size": 65536 00:15:17.526 } 00:15:17.526 ] 00:15:17.526 }' 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.526 04:31:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.913 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.913 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.913 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.913 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.914 "name": "raid_bdev1", 00:15:18.914 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:18.914 "strip_size_kb": 64, 00:15:18.914 "state": "online", 00:15:18.914 "raid_level": "raid5f", 00:15:18.914 "superblock": false, 00:15:18.914 "num_base_bdevs": 4, 00:15:18.914 "num_base_bdevs_discovered": 4, 00:15:18.914 "num_base_bdevs_operational": 4, 00:15:18.914 "process": { 00:15:18.914 "type": "rebuild", 00:15:18.914 "target": "spare", 00:15:18.914 "progress": { 00:15:18.914 "blocks": 86400, 00:15:18.914 "percent": 43 00:15:18.914 } 00:15:18.914 }, 00:15:18.914 "base_bdevs_list": [ 00:15:18.914 { 00:15:18.914 "name": "spare", 00:15:18.914 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:18.914 "is_configured": true, 00:15:18.914 "data_offset": 0, 00:15:18.914 "data_size": 65536 00:15:18.914 }, 00:15:18.914 { 00:15:18.914 "name": "BaseBdev2", 00:15:18.914 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:18.914 "is_configured": true, 00:15:18.914 "data_offset": 0, 00:15:18.914 "data_size": 65536 00:15:18.914 }, 00:15:18.914 { 00:15:18.914 "name": "BaseBdev3", 00:15:18.914 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:18.914 "is_configured": true, 00:15:18.914 "data_offset": 0, 00:15:18.914 "data_size": 65536 00:15:18.914 }, 00:15:18.914 { 00:15:18.914 "name": "BaseBdev4", 00:15:18.914 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:18.914 "is_configured": true, 00:15:18.914 "data_offset": 0, 00:15:18.914 "data_size": 65536 00:15:18.914 } 00:15:18.914 ] 00:15:18.914 }' 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.914 04:31:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.852 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.852 "name": "raid_bdev1", 00:15:19.852 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:19.852 "strip_size_kb": 64, 00:15:19.852 "state": "online", 00:15:19.852 "raid_level": "raid5f", 00:15:19.852 "superblock": false, 00:15:19.852 "num_base_bdevs": 4, 00:15:19.852 "num_base_bdevs_discovered": 4, 00:15:19.852 "num_base_bdevs_operational": 4, 00:15:19.852 "process": { 00:15:19.852 "type": "rebuild", 00:15:19.852 "target": "spare", 00:15:19.852 
"progress": { 00:15:19.852 "blocks": 109440, 00:15:19.852 "percent": 55 00:15:19.852 } 00:15:19.852 }, 00:15:19.852 "base_bdevs_list": [ 00:15:19.852 { 00:15:19.852 "name": "spare", 00:15:19.852 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:19.853 "is_configured": true, 00:15:19.853 "data_offset": 0, 00:15:19.853 "data_size": 65536 00:15:19.853 }, 00:15:19.853 { 00:15:19.853 "name": "BaseBdev2", 00:15:19.853 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:19.853 "is_configured": true, 00:15:19.853 "data_offset": 0, 00:15:19.853 "data_size": 65536 00:15:19.853 }, 00:15:19.853 { 00:15:19.853 "name": "BaseBdev3", 00:15:19.853 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:19.853 "is_configured": true, 00:15:19.853 "data_offset": 0, 00:15:19.853 "data_size": 65536 00:15:19.853 }, 00:15:19.853 { 00:15:19.853 "name": "BaseBdev4", 00:15:19.853 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:19.853 "is_configured": true, 00:15:19.853 "data_offset": 0, 00:15:19.853 "data_size": 65536 00:15:19.853 } 00:15:19.853 ] 00:15:19.853 }' 00:15:19.853 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.853 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.853 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.853 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.853 04:31:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.234 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.234 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.234 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.234 04:31:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.234 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.234 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.234 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.235 "name": "raid_bdev1", 00:15:21.235 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:21.235 "strip_size_kb": 64, 00:15:21.235 "state": "online", 00:15:21.235 "raid_level": "raid5f", 00:15:21.235 "superblock": false, 00:15:21.235 "num_base_bdevs": 4, 00:15:21.235 "num_base_bdevs_discovered": 4, 00:15:21.235 "num_base_bdevs_operational": 4, 00:15:21.235 "process": { 00:15:21.235 "type": "rebuild", 00:15:21.235 "target": "spare", 00:15:21.235 "progress": { 00:15:21.235 "blocks": 132480, 00:15:21.235 "percent": 67 00:15:21.235 } 00:15:21.235 }, 00:15:21.235 "base_bdevs_list": [ 00:15:21.235 { 00:15:21.235 "name": "spare", 00:15:21.235 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:21.235 "is_configured": true, 00:15:21.235 "data_offset": 0, 00:15:21.235 "data_size": 65536 00:15:21.235 }, 00:15:21.235 { 00:15:21.235 "name": "BaseBdev2", 00:15:21.235 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:21.235 "is_configured": true, 00:15:21.235 "data_offset": 0, 00:15:21.235 "data_size": 65536 00:15:21.235 }, 00:15:21.235 { 
00:15:21.235 "name": "BaseBdev3", 00:15:21.235 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:21.235 "is_configured": true, 00:15:21.235 "data_offset": 0, 00:15:21.235 "data_size": 65536 00:15:21.235 }, 00:15:21.235 { 00:15:21.235 "name": "BaseBdev4", 00:15:21.235 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:21.235 "is_configured": true, 00:15:21.235 "data_offset": 0, 00:15:21.235 "data_size": 65536 00:15:21.235 } 00:15:21.235 ] 00:15:21.235 }' 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.235 04:31:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.175 "name": "raid_bdev1", 00:15:22.175 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:22.175 "strip_size_kb": 64, 00:15:22.175 "state": "online", 00:15:22.175 "raid_level": "raid5f", 00:15:22.175 "superblock": false, 00:15:22.175 "num_base_bdevs": 4, 00:15:22.175 "num_base_bdevs_discovered": 4, 00:15:22.175 "num_base_bdevs_operational": 4, 00:15:22.175 "process": { 00:15:22.175 "type": "rebuild", 00:15:22.175 "target": "spare", 00:15:22.175 "progress": { 00:15:22.175 "blocks": 153600, 00:15:22.175 "percent": 78 00:15:22.175 } 00:15:22.175 }, 00:15:22.175 "base_bdevs_list": [ 00:15:22.175 { 00:15:22.175 "name": "spare", 00:15:22.175 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:22.175 "is_configured": true, 00:15:22.175 "data_offset": 0, 00:15:22.175 "data_size": 65536 00:15:22.175 }, 00:15:22.175 { 00:15:22.175 "name": "BaseBdev2", 00:15:22.175 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:22.175 "is_configured": true, 00:15:22.175 "data_offset": 0, 00:15:22.175 "data_size": 65536 00:15:22.175 }, 00:15:22.175 { 00:15:22.175 "name": "BaseBdev3", 00:15:22.175 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:22.175 "is_configured": true, 00:15:22.175 "data_offset": 0, 00:15:22.175 "data_size": 65536 00:15:22.175 }, 00:15:22.175 { 00:15:22.175 "name": "BaseBdev4", 00:15:22.175 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:22.175 "is_configured": true, 00:15:22.175 "data_offset": 0, 00:15:22.175 "data_size": 65536 00:15:22.175 } 00:15:22.175 ] 00:15:22.175 }' 00:15:22.175 04:31:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.175 04:31:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.175 04:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.175 04:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.175 04:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.116 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.116 "name": "raid_bdev1", 00:15:23.116 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:23.116 "strip_size_kb": 64, 00:15:23.116 "state": "online", 00:15:23.116 "raid_level": "raid5f", 00:15:23.116 "superblock": false, 00:15:23.116 "num_base_bdevs": 4, 00:15:23.116 
"num_base_bdevs_discovered": 4, 00:15:23.116 "num_base_bdevs_operational": 4, 00:15:23.116 "process": { 00:15:23.116 "type": "rebuild", 00:15:23.116 "target": "spare", 00:15:23.116 "progress": { 00:15:23.116 "blocks": 174720, 00:15:23.116 "percent": 88 00:15:23.116 } 00:15:23.116 }, 00:15:23.116 "base_bdevs_list": [ 00:15:23.116 { 00:15:23.116 "name": "spare", 00:15:23.116 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:23.116 "is_configured": true, 00:15:23.116 "data_offset": 0, 00:15:23.116 "data_size": 65536 00:15:23.116 }, 00:15:23.116 { 00:15:23.116 "name": "BaseBdev2", 00:15:23.116 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:23.116 "is_configured": true, 00:15:23.116 "data_offset": 0, 00:15:23.116 "data_size": 65536 00:15:23.116 }, 00:15:23.116 { 00:15:23.116 "name": "BaseBdev3", 00:15:23.116 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:23.116 "is_configured": true, 00:15:23.116 "data_offset": 0, 00:15:23.116 "data_size": 65536 00:15:23.116 }, 00:15:23.116 { 00:15:23.116 "name": "BaseBdev4", 00:15:23.116 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:23.116 "is_configured": true, 00:15:23.116 "data_offset": 0, 00:15:23.116 "data_size": 65536 00:15:23.116 } 00:15:23.116 ] 00:15:23.116 }' 00:15:23.376 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.376 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.376 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.376 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.376 04:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.316 [2024-12-13 04:31:24.256770] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:24.316 [2024-12-13 04:31:24.256838] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:24.316 [2024-12-13 04:31:24.256878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.316 "name": "raid_bdev1", 00:15:24.316 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:24.316 "strip_size_kb": 64, 00:15:24.316 "state": "online", 00:15:24.316 "raid_level": "raid5f", 00:15:24.316 "superblock": false, 00:15:24.316 "num_base_bdevs": 4, 00:15:24.316 "num_base_bdevs_discovered": 4, 00:15:24.316 "num_base_bdevs_operational": 4, 00:15:24.316 "process": { 00:15:24.316 "type": "rebuild", 00:15:24.316 "target": "spare", 00:15:24.316 "progress": { 00:15:24.316 "blocks": 195840, 00:15:24.316 
"percent": 99 00:15:24.316 } 00:15:24.316 }, 00:15:24.316 "base_bdevs_list": [ 00:15:24.316 { 00:15:24.316 "name": "spare", 00:15:24.316 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 }, 00:15:24.316 { 00:15:24.316 "name": "BaseBdev2", 00:15:24.316 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 }, 00:15:24.316 { 00:15:24.316 "name": "BaseBdev3", 00:15:24.316 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 }, 00:15:24.316 { 00:15:24.316 "name": "BaseBdev4", 00:15:24.316 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 } 00:15:24.316 ] 00:15:24.316 }' 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.316 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.576 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.576 04:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.516 "name": "raid_bdev1", 00:15:25.516 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:25.516 "strip_size_kb": 64, 00:15:25.516 "state": "online", 00:15:25.516 "raid_level": "raid5f", 00:15:25.516 "superblock": false, 00:15:25.516 "num_base_bdevs": 4, 00:15:25.516 "num_base_bdevs_discovered": 4, 00:15:25.516 "num_base_bdevs_operational": 4, 00:15:25.516 "base_bdevs_list": [ 00:15:25.516 { 00:15:25.516 "name": "spare", 00:15:25.516 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:25.516 "is_configured": true, 00:15:25.516 "data_offset": 0, 00:15:25.516 "data_size": 65536 00:15:25.516 }, 00:15:25.516 { 00:15:25.516 "name": "BaseBdev2", 00:15:25.516 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:25.516 "is_configured": true, 00:15:25.516 "data_offset": 0, 00:15:25.516 "data_size": 65536 00:15:25.516 }, 00:15:25.516 { 00:15:25.516 "name": "BaseBdev3", 00:15:25.516 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:25.516 "is_configured": true, 00:15:25.516 "data_offset": 0, 00:15:25.516 "data_size": 65536 00:15:25.516 }, 00:15:25.516 { 00:15:25.516 "name": "BaseBdev4", 00:15:25.516 
"uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:25.516 "is_configured": true, 00:15:25.516 "data_offset": 0, 00:15:25.516 "data_size": 65536 00:15:25.516 } 00:15:25.516 ] 00:15:25.516 }' 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.516 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.517 "name": "raid_bdev1", 00:15:25.517 "uuid": 
"cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:25.517 "strip_size_kb": 64, 00:15:25.517 "state": "online", 00:15:25.517 "raid_level": "raid5f", 00:15:25.517 "superblock": false, 00:15:25.517 "num_base_bdevs": 4, 00:15:25.517 "num_base_bdevs_discovered": 4, 00:15:25.517 "num_base_bdevs_operational": 4, 00:15:25.517 "base_bdevs_list": [ 00:15:25.517 { 00:15:25.517 "name": "spare", 00:15:25.517 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:25.517 "is_configured": true, 00:15:25.517 "data_offset": 0, 00:15:25.517 "data_size": 65536 00:15:25.517 }, 00:15:25.517 { 00:15:25.517 "name": "BaseBdev2", 00:15:25.517 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:25.517 "is_configured": true, 00:15:25.517 "data_offset": 0, 00:15:25.517 "data_size": 65536 00:15:25.517 }, 00:15:25.517 { 00:15:25.517 "name": "BaseBdev3", 00:15:25.517 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:25.517 "is_configured": true, 00:15:25.517 "data_offset": 0, 00:15:25.517 "data_size": 65536 00:15:25.517 }, 00:15:25.517 { 00:15:25.517 "name": "BaseBdev4", 00:15:25.517 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:25.517 "is_configured": true, 00:15:25.517 "data_offset": 0, 00:15:25.517 "data_size": 65536 00:15:25.517 } 00:15:25.517 ] 00:15:25.517 }' 00:15:25.517 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.777 04:31:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.777 "name": "raid_bdev1", 00:15:25.777 "uuid": "cc3b5234-1e85-4a11-a49e-d6063d345944", 00:15:25.777 "strip_size_kb": 64, 00:15:25.777 "state": "online", 00:15:25.777 "raid_level": "raid5f", 00:15:25.777 "superblock": false, 00:15:25.777 "num_base_bdevs": 4, 00:15:25.777 "num_base_bdevs_discovered": 4, 00:15:25.777 "num_base_bdevs_operational": 4, 00:15:25.777 "base_bdevs_list": [ 00:15:25.777 { 00:15:25.777 "name": "spare", 00:15:25.777 "uuid": "698b5c1d-c47f-51be-aec4-8384372079f9", 00:15:25.777 "is_configured": 
true, 00:15:25.777 "data_offset": 0, 00:15:25.777 "data_size": 65536 00:15:25.777 }, 00:15:25.777 { 00:15:25.777 "name": "BaseBdev2", 00:15:25.777 "uuid": "ce7a4151-103c-51f0-b99e-97c1db9cbed3", 00:15:25.777 "is_configured": true, 00:15:25.777 "data_offset": 0, 00:15:25.777 "data_size": 65536 00:15:25.777 }, 00:15:25.777 { 00:15:25.777 "name": "BaseBdev3", 00:15:25.777 "uuid": "5ad43904-1316-5c05-80d0-e27105314f2c", 00:15:25.777 "is_configured": true, 00:15:25.777 "data_offset": 0, 00:15:25.777 "data_size": 65536 00:15:25.777 }, 00:15:25.777 { 00:15:25.777 "name": "BaseBdev4", 00:15:25.777 "uuid": "305d56c5-be7d-520d-aa44-7ca0b0ebc3e8", 00:15:25.777 "is_configured": true, 00:15:25.777 "data_offset": 0, 00:15:25.777 "data_size": 65536 00:15:25.777 } 00:15:25.777 ] 00:15:25.777 }' 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.777 04:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 [2024-12-13 04:31:26.116539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.347 [2024-12-13 04:31:26.116638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.347 [2024-12-13 04:31:26.116754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.347 [2024-12-13 04:31:26.116873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.347 [2024-12-13 04:31:26.116888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:26.347 04:31:26 
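The `verify_raid_bdev_state` helper above fetches every raid bdev over RPC and selects one by name with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking its fields. A minimal stand-alone sketch of the same field checks, run against a canned JSON blob instead of a live SPDK target (sed stands in for jq so the sketch has no extra dependencies):

```shell
# Hypothetical stand-in for `rpc_cmd bdev_raid_get_bdevs` output: a trimmed
# copy of the raid_bdev_info the log shows above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}'

# Extract the fields verify_raid_bdev_state asserts on.
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')
discovered=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

[ "$state" = online ] || { echo "unexpected state: $state"; exit 1; }
[ "$discovered" -eq 4 ] || { echo "unexpected count: $discovered"; exit 1; }
echo "raid_bdev1 verified: $state with $discovered base bdevs discovered"
```

In the real helper the `[[ none == \n\o\n\e ]]` lines are bash's xtrace rendering of a pattern-match comparison (`[[ $tmp == "none" ]]` with the right-hand side quoted, so each character is escaped in the trace).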
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:26.347 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:26.607 /dev/nbd0 00:15:26.607 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.607 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.607 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:26.607 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:26.607 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.608 1+0 records in 00:15:26.608 1+0 records out 00:15:26.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002928 s, 14.0 MB/s 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:26.608 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:26.608 /dev/nbd1 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.868 1+0 records in 00:15:26.868 1+0 records out 00:15:26.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527826 s, 7.8 MB/s 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.868 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- 
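The `waitfornbd` loop above polls `/proc/partitions` for the device name, then issues one direct-I/O `dd` read and checks that a non-zero number of bytes came back; `bdev_raid.sh@738` then compares the two exported devices byte-for-byte with `cmp -i 0`. A sketch of those two checks using temp files in place of `/dev/nbd0` and `/dev/nbd1` (no nbd devices are assumed here):

```shell
# Temp files stand in for the two nbd-exported bdevs.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/nbd0" bs=4096 count=16 2>/dev/null
cp "$tmpdir/nbd0" "$tmpdir/nbd1"

# waitfornbd-style probe: a single 4 KiB read must transfer data.
dd if="$tmpdir/nbd0" of="$tmpdir/probe" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$tmpdir/probe")
[ "$size" -ne 0 ] || exit 1

# bdev_raid.sh@738 equivalent: rebuilt bdev and spare must match exactly.
if cmp -s "$tmpdir/nbd0" "$tmpdir/nbd1"; then result=identical; else result=different; fi
rm -rf "$tmpdir"
echo "$result"
```

A silent `cmp` exit status of 0 is what lets the test proceed to tearing the nbd devices down; any byte difference would fail the rebuild test here.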
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.128 04:31:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96779 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 96779 ']' 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 96779 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96779 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.388 killing process with pid 96779 00:15:27.388 Received shutdown signal, test time was about 60.000000 seconds 00:15:27.388 00:15:27.388 Latency(us) 00:15:27.388 [2024-12-13T04:31:27.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.388 [2024-12-13T04:31:27.403Z] =================================================================================================================== 00:15:27.388 [2024-12-13T04:31:27.403Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96779' 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 96779 00:15:27.388 [2024-12-13 04:31:27.223531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.388 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 96779 00:15:27.388 [2024-12-13 04:31:27.317148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.648 04:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:27.648 00:15:27.648 real 0m18.865s 00:15:27.648 user 0m22.638s 00:15:27.648 sys 0m2.600s 00:15:27.648 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.648 04:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.648 ************************************ 00:15:27.648 END TEST raid5f_rebuild_test 00:15:27.648 ************************************ 00:15:27.909 04:31:27 
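The `killprocess` helper traced above probes the pid with `kill -0`, reads the command name (refusing to kill anything named `sudo`), then kills and reaps it. A sketch of that guard with a background `sleep` standing in for the bdevperf reactor process (`ps -o comm= -p` is used here in place of the log's `ps --no-headers -o comm=` form):

```shell
# Background sleep stands in for the process under test (pid 96779 in the log).
sleep 60 &
pid=$!

kill -0 "$pid" || exit 1                    # fails if the process is already gone
name=$(ps -o comm= -p "$pid" | tr -d ' ')   # command name, padding stripped
[ "$name" != sudo ] || exit 1               # same safety guard the helper applies

kill "$pid"
wait "$pid" 2>/dev/null || true             # reap; exit status reflects the signal
echo "killed $name"
```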
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:27.909 04:31:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:27.909 04:31:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.909 04:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.909 ************************************ 00:15:27.909 START TEST raid5f_rebuild_test_sb 00:15:27.909 ************************************ 00:15:27.909 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:15:27.909 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:27.909 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97284 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97284 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 97284 ']' 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.910 04:31:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 [2024-12-13 04:31:27.813336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:27.910 [2024-12-13 04:31:27.813538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:27.910 Zero copy mechanism will not be used. 
00:15:27.910 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97284 ] 00:15:28.170 [2024-12-13 04:31:27.963119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.170 [2024-12-13 04:31:28.002007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.170 [2024-12-13 04:31:28.079028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.170 [2024-12-13 04:31:28.079065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.741 BaseBdev1_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.741 [2024-12-13 04:31:28.652856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.741 [2024-12-13 04:31:28.652924] vbdev_passthru.c: 636:vbdev_passthru_register: 
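The bdevperf invocation above (`-T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid`) packs the workload into short flags. The annotations below follow bdevperf's usual usage text and are meant as a reading aid, not authoritative documentation; `describe_flag` is a hypothetical helper, not part of SPDK:

```shell
# Expand the workload flags seen in the bdevperf command line above.
describe_flag() {
    case "$1" in
        -T) echo "bdev to run the test against" ;;
        -t) echo "run time in seconds" ;;
        -w) echo "workload pattern" ;;
        -M) echo "read percentage of a mixed workload" ;;
        -o) echo "I/O size" ;;
        -q) echo "queue depth" ;;
        *)  echo "unknown" ;;
    esac
}

for f in -T -t -w -M -o -q; do
    printf '%s: %s\n' "$f" "$(describe_flag "$f")"
done
```

So this run drives `raid_bdev1` for 60 seconds of 50/50 random reads and writes, 3 MiB per I/O at queue depth 2, which is why the EAL notice above warns that 3145728-byte I/Os exceed the zero-copy threshold.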
*NOTICE*: base bdev opened 00:15:28.741 [2024-12-13 04:31:28.652953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:28.741 [2024-12-13 04:31:28.652966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.741 [2024-12-13 04:31:28.655356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.741 [2024-12-13 04:31:28.655494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.741 BaseBdev1 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.741 BaseBdev2_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.741 [2024-12-13 04:31:28.683262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:28.741 [2024-12-13 04:31:28.683316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.741 [2024-12-13 04:31:28.683341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:28.741 
[2024-12-13 04:31:28.683349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.741 [2024-12-13 04:31:28.685743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.741 [2024-12-13 04:31:28.685860] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:28.741 BaseBdev2 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.741 BaseBdev3_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.741 [2024-12-13 04:31:28.717657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:28.741 [2024-12-13 04:31:28.717711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.741 [2024-12-13 04:31:28.717740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:28.741 [2024-12-13 04:31:28.717749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.741 [2024-12-13 04:31:28.720137] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.741 [2024-12-13 04:31:28.720169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:28.741 BaseBdev3 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.741 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 BaseBdev4_malloc 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 [2024-12-13 04:31:28.762371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:29.002 [2024-12-13 04:31:28.762419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.002 [2024-12-13 04:31:28.762459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:29.002 [2024-12-13 04:31:28.762469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.002 [2024-12-13 04:31:28.764938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.002 [2024-12-13 04:31:28.764972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:15:29.002 BaseBdev4 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 spare_malloc 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 spare_delay 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 [2024-12-13 04:31:28.808940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:29.002 [2024-12-13 04:31:28.809069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.002 [2024-12-13 04:31:28.809094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:29.002 [2024-12-13 04:31:28.809103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.002 [2024-12-13 04:31:28.811566] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.002 [2024-12-13 04:31:28.811597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:29.002 spare 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 [2024-12-13 04:31:28.821006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.002 [2024-12-13 04:31:28.823136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.002 [2024-12-13 04:31:28.823210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.002 [2024-12-13 04:31:28.823268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.002 [2024-12-13 04:31:28.823438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:29.002 [2024-12-13 04:31:28.823459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:29.002 [2024-12-13 04:31:28.823687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:29.002 [2024-12-13 04:31:28.824156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:29.002 [2024-12-13 04:31:28.824169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:29.002 [2024-12-13 04:31:28.824280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.002 "name": "raid_bdev1", 00:15:29.002 "uuid": 
"6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:29.002 "strip_size_kb": 64, 00:15:29.002 "state": "online", 00:15:29.002 "raid_level": "raid5f", 00:15:29.002 "superblock": true, 00:15:29.002 "num_base_bdevs": 4, 00:15:29.002 "num_base_bdevs_discovered": 4, 00:15:29.002 "num_base_bdevs_operational": 4, 00:15:29.002 "base_bdevs_list": [ 00:15:29.002 { 00:15:29.002 "name": "BaseBdev1", 00:15:29.002 "uuid": "20a4e92a-b485-5652-a232-6166450c46f0", 00:15:29.002 "is_configured": true, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 }, 00:15:29.002 { 00:15:29.002 "name": "BaseBdev2", 00:15:29.002 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:29.002 "is_configured": true, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 }, 00:15:29.002 { 00:15:29.002 "name": "BaseBdev3", 00:15:29.002 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:29.002 "is_configured": true, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 }, 00:15:29.002 { 00:15:29.002 "name": "BaseBdev4", 00:15:29.002 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:29.002 "is_configured": true, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 } 00:15:29.002 ] 00:15:29.002 }' 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.002 04:31:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.262 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.262 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:29.262 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.262 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.262 [2024-12-13 04:31:29.254706] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:15:29.262 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.523 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:29.523 [2024-12-13 04:31:29.506107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:29.523 /dev/nbd0 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.783 1+0 records in 00:15:29.783 1+0 records out 00:15:29.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445802 s, 9.2 MB/s 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:29.783 04:31:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:30.353 496+0 records in 00:15:30.353 496+0 records out 00:15:30.353 97517568 bytes (98 MB, 93 MiB) copied, 0.601184 s, 162 MB/s 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:30.353 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.613 [2024-12-13 04:31:30.397671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.613 [2024-12-13 04:31:30.412541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.613 "name": "raid_bdev1", 00:15:30.613 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:30.613 "strip_size_kb": 64, 00:15:30.613 "state": "online", 00:15:30.613 "raid_level": "raid5f", 00:15:30.613 "superblock": true, 00:15:30.613 "num_base_bdevs": 4, 00:15:30.613 "num_base_bdevs_discovered": 3, 00:15:30.613 "num_base_bdevs_operational": 3, 00:15:30.613 "base_bdevs_list": [ 00:15:30.613 { 00:15:30.613 "name": null, 00:15:30.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.613 "is_configured": 
false, 00:15:30.613 "data_offset": 0, 00:15:30.613 "data_size": 63488 00:15:30.613 }, 00:15:30.613 { 00:15:30.613 "name": "BaseBdev2", 00:15:30.613 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:30.613 "is_configured": true, 00:15:30.613 "data_offset": 2048, 00:15:30.613 "data_size": 63488 00:15:30.613 }, 00:15:30.613 { 00:15:30.613 "name": "BaseBdev3", 00:15:30.613 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:30.613 "is_configured": true, 00:15:30.613 "data_offset": 2048, 00:15:30.613 "data_size": 63488 00:15:30.613 }, 00:15:30.613 { 00:15:30.613 "name": "BaseBdev4", 00:15:30.613 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:30.613 "is_configured": true, 00:15:30.613 "data_offset": 2048, 00:15:30.613 "data_size": 63488 00:15:30.613 } 00:15:30.613 ] 00:15:30.613 }' 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.613 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.874 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:30.874 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.874 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.874 [2024-12-13 04:31:30.859716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.874 [2024-12-13 04:31:30.867023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:15:30.874 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.874 04:31:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:30.874 [2024-12-13 04:31:30.869605] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.256 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.256 "name": "raid_bdev1", 00:15:32.256 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:32.256 "strip_size_kb": 64, 00:15:32.256 "state": "online", 00:15:32.256 "raid_level": "raid5f", 00:15:32.256 "superblock": true, 00:15:32.257 "num_base_bdevs": 4, 00:15:32.257 "num_base_bdevs_discovered": 4, 00:15:32.257 "num_base_bdevs_operational": 4, 00:15:32.257 "process": { 00:15:32.257 "type": "rebuild", 00:15:32.257 "target": "spare", 00:15:32.257 "progress": { 00:15:32.257 "blocks": 19200, 00:15:32.257 "percent": 10 00:15:32.257 } 00:15:32.257 }, 00:15:32.257 "base_bdevs_list": [ 00:15:32.257 { 00:15:32.257 "name": "spare", 00:15:32.257 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 }, 
00:15:32.257 { 00:15:32.257 "name": "BaseBdev2", 00:15:32.257 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 }, 00:15:32.257 { 00:15:32.257 "name": "BaseBdev3", 00:15:32.257 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 }, 00:15:32.257 { 00:15:32.257 "name": "BaseBdev4", 00:15:32.257 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 } 00:15:32.257 ] 00:15:32.257 }' 00:15:32.257 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.257 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.257 04:31:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.257 [2024-12-13 04:31:32.029253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.257 [2024-12-13 04:31:32.076291] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.257 [2024-12-13 04:31:32.076354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.257 [2024-12-13 04:31:32.076374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.257 
[2024-12-13 04:31:32.076394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.257 "name": "raid_bdev1", 00:15:32.257 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:32.257 "strip_size_kb": 64, 00:15:32.257 "state": "online", 00:15:32.257 "raid_level": "raid5f", 00:15:32.257 "superblock": true, 00:15:32.257 "num_base_bdevs": 4, 00:15:32.257 "num_base_bdevs_discovered": 3, 00:15:32.257 "num_base_bdevs_operational": 3, 00:15:32.257 "base_bdevs_list": [ 00:15:32.257 { 00:15:32.257 "name": null, 00:15:32.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.257 "is_configured": false, 00:15:32.257 "data_offset": 0, 00:15:32.257 "data_size": 63488 00:15:32.257 }, 00:15:32.257 { 00:15:32.257 "name": "BaseBdev2", 00:15:32.257 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 }, 00:15:32.257 { 00:15:32.257 "name": "BaseBdev3", 00:15:32.257 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 }, 00:15:32.257 { 00:15:32.257 "name": "BaseBdev4", 00:15:32.257 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:32.257 "is_configured": true, 00:15:32.257 "data_offset": 2048, 00:15:32.257 "data_size": 63488 00:15:32.257 } 00:15:32.257 ] 00:15:32.257 }' 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.257 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.827 "name": "raid_bdev1", 00:15:32.827 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:32.827 "strip_size_kb": 64, 00:15:32.827 "state": "online", 00:15:32.827 "raid_level": "raid5f", 00:15:32.827 "superblock": true, 00:15:32.827 "num_base_bdevs": 4, 00:15:32.827 "num_base_bdevs_discovered": 3, 00:15:32.827 "num_base_bdevs_operational": 3, 00:15:32.827 "base_bdevs_list": [ 00:15:32.827 { 00:15:32.827 "name": null, 00:15:32.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.827 "is_configured": false, 00:15:32.827 "data_offset": 0, 00:15:32.827 "data_size": 63488 00:15:32.827 }, 00:15:32.827 { 00:15:32.827 "name": "BaseBdev2", 00:15:32.827 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:32.827 "is_configured": true, 00:15:32.827 "data_offset": 2048, 00:15:32.827 "data_size": 63488 00:15:32.827 }, 00:15:32.827 { 00:15:32.827 "name": "BaseBdev3", 00:15:32.827 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:32.827 "is_configured": true, 00:15:32.827 "data_offset": 2048, 00:15:32.827 "data_size": 63488 00:15:32.827 }, 00:15:32.827 { 00:15:32.827 "name": "BaseBdev4", 00:15:32.827 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 
00:15:32.827 "is_configured": true, 00:15:32.827 "data_offset": 2048, 00:15:32.827 "data_size": 63488 00:15:32.827 } 00:15:32.827 ] 00:15:32.827 }' 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.827 [2024-12-13 04:31:32.700588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.827 [2024-12-13 04:31:32.705983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.827 04:31:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:32.827 [2024-12-13 04:31:32.708527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.766 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.767 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.767 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.767 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.767 "name": "raid_bdev1", 00:15:33.767 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:33.767 "strip_size_kb": 64, 00:15:33.767 "state": "online", 00:15:33.767 "raid_level": "raid5f", 00:15:33.767 "superblock": true, 00:15:33.767 "num_base_bdevs": 4, 00:15:33.767 "num_base_bdevs_discovered": 4, 00:15:33.767 "num_base_bdevs_operational": 4, 00:15:33.767 "process": { 00:15:33.767 "type": "rebuild", 00:15:33.767 "target": "spare", 00:15:33.767 "progress": { 00:15:33.767 "blocks": 19200, 00:15:33.767 "percent": 10 00:15:33.767 } 00:15:33.767 }, 00:15:33.767 "base_bdevs_list": [ 00:15:33.767 { 00:15:33.767 "name": "spare", 00:15:33.767 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:33.767 "is_configured": true, 00:15:33.767 "data_offset": 2048, 00:15:33.767 "data_size": 63488 00:15:33.767 }, 00:15:33.767 { 00:15:33.767 "name": "BaseBdev2", 00:15:33.767 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:33.767 "is_configured": true, 00:15:33.767 "data_offset": 2048, 00:15:33.767 "data_size": 63488 00:15:33.767 }, 00:15:33.767 { 00:15:33.767 "name": "BaseBdev3", 00:15:33.767 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:33.767 "is_configured": true, 00:15:33.767 "data_offset": 2048, 
00:15:33.767 "data_size": 63488 00:15:33.767 }, 00:15:33.767 { 00:15:33.767 "name": "BaseBdev4", 00:15:33.767 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:33.767 "is_configured": true, 00:15:33.767 "data_offset": 2048, 00:15:33.767 "data_size": 63488 00:15:33.767 } 00:15:33.767 ] 00:15:33.767 }' 00:15:33.767 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:34.028 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=542 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.028 "name": "raid_bdev1", 00:15:34.028 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:34.028 "strip_size_kb": 64, 00:15:34.028 "state": "online", 00:15:34.028 "raid_level": "raid5f", 00:15:34.028 "superblock": true, 00:15:34.028 "num_base_bdevs": 4, 00:15:34.028 "num_base_bdevs_discovered": 4, 00:15:34.028 "num_base_bdevs_operational": 4, 00:15:34.028 "process": { 00:15:34.028 "type": "rebuild", 00:15:34.028 "target": "spare", 00:15:34.028 "progress": { 00:15:34.028 "blocks": 21120, 00:15:34.028 "percent": 11 00:15:34.028 } 00:15:34.028 }, 00:15:34.028 "base_bdevs_list": [ 00:15:34.028 { 00:15:34.028 "name": "spare", 00:15:34.028 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:34.028 "is_configured": true, 00:15:34.028 "data_offset": 2048, 00:15:34.028 "data_size": 63488 00:15:34.028 }, 00:15:34.028 { 00:15:34.028 "name": "BaseBdev2", 00:15:34.028 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:34.028 "is_configured": true, 00:15:34.028 "data_offset": 2048, 00:15:34.028 "data_size": 63488 00:15:34.028 }, 00:15:34.028 { 00:15:34.028 "name": "BaseBdev3", 00:15:34.028 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:34.028 "is_configured": true, 00:15:34.028 "data_offset": 2048, 
00:15:34.028 "data_size": 63488 00:15:34.028 }, 00:15:34.028 { 00:15:34.028 "name": "BaseBdev4", 00:15:34.028 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:34.028 "is_configured": true, 00:15:34.028 "data_offset": 2048, 00:15:34.028 "data_size": 63488 00:15:34.028 } 00:15:34.028 ] 00:15:34.028 }' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.028 04:31:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.028 04:31:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.028 04:31:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.408 "name": "raid_bdev1", 00:15:35.408 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:35.408 "strip_size_kb": 64, 00:15:35.408 "state": "online", 00:15:35.408 "raid_level": "raid5f", 00:15:35.408 "superblock": true, 00:15:35.408 "num_base_bdevs": 4, 00:15:35.408 "num_base_bdevs_discovered": 4, 00:15:35.408 "num_base_bdevs_operational": 4, 00:15:35.408 "process": { 00:15:35.408 "type": "rebuild", 00:15:35.408 "target": "spare", 00:15:35.408 "progress": { 00:15:35.408 "blocks": 44160, 00:15:35.408 "percent": 23 00:15:35.408 } 00:15:35.408 }, 00:15:35.408 "base_bdevs_list": [ 00:15:35.408 { 00:15:35.408 "name": "spare", 00:15:35.408 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:35.408 "is_configured": true, 00:15:35.408 "data_offset": 2048, 00:15:35.408 "data_size": 63488 00:15:35.408 }, 00:15:35.408 { 00:15:35.408 "name": "BaseBdev2", 00:15:35.408 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:35.408 "is_configured": true, 00:15:35.408 "data_offset": 2048, 00:15:35.408 "data_size": 63488 00:15:35.408 }, 00:15:35.408 { 00:15:35.408 "name": "BaseBdev3", 00:15:35.408 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:35.408 "is_configured": true, 00:15:35.408 "data_offset": 2048, 00:15:35.408 "data_size": 63488 00:15:35.408 }, 00:15:35.408 { 00:15:35.408 "name": "BaseBdev4", 00:15:35.408 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:35.408 "is_configured": true, 00:15:35.408 "data_offset": 2048, 00:15:35.408 "data_size": 63488 00:15:35.408 } 00:15:35.408 ] 00:15:35.408 }' 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.408 04:31:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.408 04:31:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.348 "name": "raid_bdev1", 00:15:36.348 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:36.348 "strip_size_kb": 64, 00:15:36.348 "state": "online", 00:15:36.348 "raid_level": "raid5f", 00:15:36.348 "superblock": true, 00:15:36.348 "num_base_bdevs": 4, 00:15:36.348 "num_base_bdevs_discovered": 4, 00:15:36.348 "num_base_bdevs_operational": 
4, 00:15:36.348 "process": { 00:15:36.348 "type": "rebuild", 00:15:36.348 "target": "spare", 00:15:36.348 "progress": { 00:15:36.348 "blocks": 65280, 00:15:36.348 "percent": 34 00:15:36.348 } 00:15:36.348 }, 00:15:36.348 "base_bdevs_list": [ 00:15:36.348 { 00:15:36.348 "name": "spare", 00:15:36.348 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:36.348 "is_configured": true, 00:15:36.348 "data_offset": 2048, 00:15:36.348 "data_size": 63488 00:15:36.348 }, 00:15:36.348 { 00:15:36.348 "name": "BaseBdev2", 00:15:36.348 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:36.348 "is_configured": true, 00:15:36.348 "data_offset": 2048, 00:15:36.348 "data_size": 63488 00:15:36.348 }, 00:15:36.348 { 00:15:36.348 "name": "BaseBdev3", 00:15:36.348 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:36.348 "is_configured": true, 00:15:36.348 "data_offset": 2048, 00:15:36.348 "data_size": 63488 00:15:36.348 }, 00:15:36.348 { 00:15:36.348 "name": "BaseBdev4", 00:15:36.348 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:36.348 "is_configured": true, 00:15:36.348 "data_offset": 2048, 00:15:36.348 "data_size": 63488 00:15:36.348 } 00:15:36.348 ] 00:15:36.348 }' 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.348 04:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.730 
04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.730 "name": "raid_bdev1", 00:15:37.730 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:37.730 "strip_size_kb": 64, 00:15:37.730 "state": "online", 00:15:37.730 "raid_level": "raid5f", 00:15:37.730 "superblock": true, 00:15:37.730 "num_base_bdevs": 4, 00:15:37.730 "num_base_bdevs_discovered": 4, 00:15:37.730 "num_base_bdevs_operational": 4, 00:15:37.730 "process": { 00:15:37.730 "type": "rebuild", 00:15:37.730 "target": "spare", 00:15:37.730 "progress": { 00:15:37.730 "blocks": 88320, 00:15:37.730 "percent": 46 00:15:37.730 } 00:15:37.730 }, 00:15:37.730 "base_bdevs_list": [ 00:15:37.730 { 00:15:37.730 "name": "spare", 00:15:37.730 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:37.730 "is_configured": true, 00:15:37.730 "data_offset": 2048, 00:15:37.730 "data_size": 63488 00:15:37.730 }, 00:15:37.730 { 00:15:37.730 "name": "BaseBdev2", 00:15:37.730 "uuid": 
"442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:37.730 "is_configured": true, 00:15:37.730 "data_offset": 2048, 00:15:37.730 "data_size": 63488 00:15:37.730 }, 00:15:37.730 { 00:15:37.730 "name": "BaseBdev3", 00:15:37.730 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:37.730 "is_configured": true, 00:15:37.730 "data_offset": 2048, 00:15:37.730 "data_size": 63488 00:15:37.730 }, 00:15:37.730 { 00:15:37.730 "name": "BaseBdev4", 00:15:37.730 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:37.730 "is_configured": true, 00:15:37.730 "data_offset": 2048, 00:15:37.730 "data_size": 63488 00:15:37.730 } 00:15:37.730 ] 00:15:37.730 }' 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.730 04:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.685 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.685 "name": "raid_bdev1", 00:15:38.685 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:38.685 "strip_size_kb": 64, 00:15:38.685 "state": "online", 00:15:38.685 "raid_level": "raid5f", 00:15:38.685 "superblock": true, 00:15:38.685 "num_base_bdevs": 4, 00:15:38.685 "num_base_bdevs_discovered": 4, 00:15:38.685 "num_base_bdevs_operational": 4, 00:15:38.685 "process": { 00:15:38.685 "type": "rebuild", 00:15:38.685 "target": "spare", 00:15:38.685 "progress": { 00:15:38.685 "blocks": 109440, 00:15:38.685 "percent": 57 00:15:38.685 } 00:15:38.685 }, 00:15:38.685 "base_bdevs_list": [ 00:15:38.685 { 00:15:38.686 "name": "spare", 00:15:38.686 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:38.686 "is_configured": true, 00:15:38.686 "data_offset": 2048, 00:15:38.686 "data_size": 63488 00:15:38.686 }, 00:15:38.686 { 00:15:38.686 "name": "BaseBdev2", 00:15:38.686 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:38.686 "is_configured": true, 00:15:38.686 "data_offset": 2048, 00:15:38.686 "data_size": 63488 00:15:38.686 }, 00:15:38.686 { 00:15:38.686 "name": "BaseBdev3", 00:15:38.686 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:38.686 "is_configured": true, 00:15:38.686 "data_offset": 2048, 00:15:38.686 "data_size": 63488 00:15:38.686 }, 00:15:38.686 { 00:15:38.686 "name": "BaseBdev4", 00:15:38.686 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:38.686 "is_configured": true, 00:15:38.686 "data_offset": 
2048, 00:15:38.686 "data_size": 63488 00:15:38.686 } 00:15:38.686 ] 00:15:38.686 }' 00:15:38.686 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.686 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.686 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.686 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.686 04:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.634 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.894 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.895 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.895 
"name": "raid_bdev1", 00:15:39.895 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:39.895 "strip_size_kb": 64, 00:15:39.895 "state": "online", 00:15:39.895 "raid_level": "raid5f", 00:15:39.895 "superblock": true, 00:15:39.895 "num_base_bdevs": 4, 00:15:39.895 "num_base_bdevs_discovered": 4, 00:15:39.895 "num_base_bdevs_operational": 4, 00:15:39.895 "process": { 00:15:39.895 "type": "rebuild", 00:15:39.895 "target": "spare", 00:15:39.895 "progress": { 00:15:39.895 "blocks": 130560, 00:15:39.895 "percent": 68 00:15:39.895 } 00:15:39.895 }, 00:15:39.895 "base_bdevs_list": [ 00:15:39.895 { 00:15:39.895 "name": "spare", 00:15:39.895 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:39.895 "is_configured": true, 00:15:39.895 "data_offset": 2048, 00:15:39.895 "data_size": 63488 00:15:39.895 }, 00:15:39.895 { 00:15:39.895 "name": "BaseBdev2", 00:15:39.895 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:39.895 "is_configured": true, 00:15:39.895 "data_offset": 2048, 00:15:39.895 "data_size": 63488 00:15:39.895 }, 00:15:39.895 { 00:15:39.895 "name": "BaseBdev3", 00:15:39.895 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:39.895 "is_configured": true, 00:15:39.895 "data_offset": 2048, 00:15:39.895 "data_size": 63488 00:15:39.895 }, 00:15:39.895 { 00:15:39.895 "name": "BaseBdev4", 00:15:39.895 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:39.895 "is_configured": true, 00:15:39.895 "data_offset": 2048, 00:15:39.895 "data_size": 63488 00:15:39.895 } 00:15:39.895 ] 00:15:39.895 }' 00:15:39.895 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.895 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.895 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.895 04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.895 
04:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.835 "name": "raid_bdev1", 00:15:40.835 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:40.835 "strip_size_kb": 64, 00:15:40.835 "state": "online", 00:15:40.835 "raid_level": "raid5f", 00:15:40.835 "superblock": true, 00:15:40.835 "num_base_bdevs": 4, 00:15:40.835 "num_base_bdevs_discovered": 4, 00:15:40.835 "num_base_bdevs_operational": 4, 00:15:40.835 "process": { 00:15:40.835 "type": "rebuild", 00:15:40.835 "target": "spare", 00:15:40.835 "progress": { 00:15:40.835 "blocks": 153600, 00:15:40.835 "percent": 80 00:15:40.835 } 00:15:40.835 }, 
00:15:40.835 "base_bdevs_list": [ 00:15:40.835 { 00:15:40.835 "name": "spare", 00:15:40.835 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:40.835 "is_configured": true, 00:15:40.835 "data_offset": 2048, 00:15:40.835 "data_size": 63488 00:15:40.835 }, 00:15:40.835 { 00:15:40.835 "name": "BaseBdev2", 00:15:40.835 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:40.835 "is_configured": true, 00:15:40.835 "data_offset": 2048, 00:15:40.835 "data_size": 63488 00:15:40.835 }, 00:15:40.835 { 00:15:40.835 "name": "BaseBdev3", 00:15:40.835 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:40.835 "is_configured": true, 00:15:40.835 "data_offset": 2048, 00:15:40.835 "data_size": 63488 00:15:40.835 }, 00:15:40.835 { 00:15:40.835 "name": "BaseBdev4", 00:15:40.835 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:40.835 "is_configured": true, 00:15:40.835 "data_offset": 2048, 00:15:40.835 "data_size": 63488 00:15:40.835 } 00:15:40.835 ] 00:15:40.835 }' 00:15:40.835 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.095 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.095 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.095 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.095 04:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.036 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.036 "name": "raid_bdev1", 00:15:42.036 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:42.036 "strip_size_kb": 64, 00:15:42.036 "state": "online", 00:15:42.036 "raid_level": "raid5f", 00:15:42.036 "superblock": true, 00:15:42.036 "num_base_bdevs": 4, 00:15:42.036 "num_base_bdevs_discovered": 4, 00:15:42.036 "num_base_bdevs_operational": 4, 00:15:42.036 "process": { 00:15:42.036 "type": "rebuild", 00:15:42.036 "target": "spare", 00:15:42.036 "progress": { 00:15:42.036 "blocks": 176640, 00:15:42.036 "percent": 92 00:15:42.036 } 00:15:42.036 }, 00:15:42.036 "base_bdevs_list": [ 00:15:42.036 { 00:15:42.036 "name": "spare", 00:15:42.037 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:42.037 "is_configured": true, 00:15:42.037 "data_offset": 2048, 00:15:42.037 "data_size": 63488 00:15:42.037 }, 00:15:42.037 { 00:15:42.037 "name": "BaseBdev2", 00:15:42.037 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:42.037 "is_configured": true, 00:15:42.037 "data_offset": 2048, 00:15:42.037 "data_size": 63488 00:15:42.037 }, 00:15:42.037 { 00:15:42.037 "name": "BaseBdev3", 
00:15:42.037 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:42.037 "is_configured": true, 00:15:42.037 "data_offset": 2048, 00:15:42.037 "data_size": 63488 00:15:42.037 }, 00:15:42.037 { 00:15:42.037 "name": "BaseBdev4", 00:15:42.037 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:42.037 "is_configured": true, 00:15:42.037 "data_offset": 2048, 00:15:42.037 "data_size": 63488 00:15:42.037 } 00:15:42.037 ] 00:15:42.037 }' 00:15:42.037 04:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.037 04:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.037 04:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.297 04:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.297 04:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.867 [2024-12-13 04:31:42.755962] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:42.867 [2024-12-13 04:31:42.756095] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:42.867 [2024-12-13 04:31:42.756251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.127 04:31:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.127 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.127 "name": "raid_bdev1", 00:15:43.127 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:43.127 "strip_size_kb": 64, 00:15:43.127 "state": "online", 00:15:43.127 "raid_level": "raid5f", 00:15:43.127 "superblock": true, 00:15:43.127 "num_base_bdevs": 4, 00:15:43.127 "num_base_bdevs_discovered": 4, 00:15:43.127 "num_base_bdevs_operational": 4, 00:15:43.127 "base_bdevs_list": [ 00:15:43.127 { 00:15:43.127 "name": "spare", 00:15:43.127 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:43.127 "is_configured": true, 00:15:43.127 "data_offset": 2048, 00:15:43.127 "data_size": 63488 00:15:43.127 }, 00:15:43.127 { 00:15:43.127 "name": "BaseBdev2", 00:15:43.127 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:43.127 "is_configured": true, 00:15:43.127 "data_offset": 2048, 00:15:43.127 "data_size": 63488 00:15:43.127 }, 00:15:43.127 { 00:15:43.127 "name": "BaseBdev3", 00:15:43.127 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:43.127 "is_configured": true, 00:15:43.127 "data_offset": 2048, 00:15:43.127 "data_size": 63488 00:15:43.127 }, 00:15:43.127 { 00:15:43.127 "name": "BaseBdev4", 00:15:43.127 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:43.127 "is_configured": true, 00:15:43.127 "data_offset": 2048, 
00:15:43.127 "data_size": 63488 00:15:43.127 } 00:15:43.127 ] 00:15:43.127 }' 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.387 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.388 "name": "raid_bdev1", 00:15:43.388 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:43.388 "strip_size_kb": 64, 00:15:43.388 
"state": "online", 00:15:43.388 "raid_level": "raid5f", 00:15:43.388 "superblock": true, 00:15:43.388 "num_base_bdevs": 4, 00:15:43.388 "num_base_bdevs_discovered": 4, 00:15:43.388 "num_base_bdevs_operational": 4, 00:15:43.388 "base_bdevs_list": [ 00:15:43.388 { 00:15:43.388 "name": "spare", 00:15:43.388 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 }, 00:15:43.388 { 00:15:43.388 "name": "BaseBdev2", 00:15:43.388 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 }, 00:15:43.388 { 00:15:43.388 "name": "BaseBdev3", 00:15:43.388 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 }, 00:15:43.388 { 00:15:43.388 "name": "BaseBdev4", 00:15:43.388 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 } 00:15:43.388 ] 00:15:43.388 }' 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.388 "name": "raid_bdev1", 00:15:43.388 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:43.388 "strip_size_kb": 64, 00:15:43.388 "state": "online", 00:15:43.388 "raid_level": "raid5f", 00:15:43.388 "superblock": true, 00:15:43.388 "num_base_bdevs": 4, 00:15:43.388 "num_base_bdevs_discovered": 4, 00:15:43.388 "num_base_bdevs_operational": 4, 00:15:43.388 "base_bdevs_list": [ 00:15:43.388 { 00:15:43.388 "name": "spare", 00:15:43.388 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:43.388 "is_configured": true, 00:15:43.388 
"data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 }, 00:15:43.388 { 00:15:43.388 "name": "BaseBdev2", 00:15:43.388 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 }, 00:15:43.388 { 00:15:43.388 "name": "BaseBdev3", 00:15:43.388 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 }, 00:15:43.388 { 00:15:43.388 "name": "BaseBdev4", 00:15:43.388 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:43.388 "is_configured": true, 00:15:43.388 "data_offset": 2048, 00:15:43.388 "data_size": 63488 00:15:43.388 } 00:15:43.388 ] 00:15:43.388 }' 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.388 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.958 [2024-12-13 04:31:43.812562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.958 [2024-12-13 04:31:43.812636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.958 [2024-12-13 04:31:43.812743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.958 [2024-12-13 04:31:43.812869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.958 [2024-12-13 04:31:43.812921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:43.958 
04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.958 04:31:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:44.218 /dev/nbd0 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.218 1+0 records in 00:15:44.218 1+0 records out 00:15:44.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516838 s, 7.9 MB/s 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.218 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:44.478 /dev/nbd1 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.478 1+0 records in 00:15:44.478 1+0 records out 00:15:44.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479329 s, 8.5 MB/s 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.478 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.738 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.999 
04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.999 [2024-12-13 04:31:44.926292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.999 [2024-12-13 04:31:44.926370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.999 [2024-12-13 04:31:44.926397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:44.999 [2024-12-13 04:31:44.926408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.999 [2024-12-13 04:31:44.928800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.999 [2024-12-13 04:31:44.928844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.999 [2024-12-13 04:31:44.928934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:44.999 [2024-12-13 04:31:44.928988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.999 [2024-12-13 04:31:44.929120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.999 [2024-12-13 04:31:44.929219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.999 [2024-12-13 04:31:44.929280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.999 spare 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.999 04:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.259 [2024-12-13 04:31:45.029176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:45.259 [2024-12-13 04:31:45.029201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:45.259 [2024-12-13 04:31:45.029497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:15:45.259 [2024-12-13 04:31:45.029993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:45.259 [2024-12-13 04:31:45.030021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:45.259 [2024-12-13 04:31:45.030169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.259 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.259 "name": "raid_bdev1", 00:15:45.259 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:45.259 "strip_size_kb": 64, 00:15:45.259 "state": "online", 00:15:45.259 "raid_level": "raid5f", 00:15:45.259 "superblock": true, 00:15:45.259 "num_base_bdevs": 4, 00:15:45.259 "num_base_bdevs_discovered": 4, 00:15:45.259 "num_base_bdevs_operational": 4, 00:15:45.259 "base_bdevs_list": [ 00:15:45.259 { 00:15:45.259 "name": "spare", 00:15:45.259 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:45.259 "is_configured": true, 00:15:45.259 "data_offset": 2048, 00:15:45.259 "data_size": 63488 00:15:45.259 }, 00:15:45.259 { 00:15:45.259 "name": "BaseBdev2", 00:15:45.259 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:45.259 "is_configured": true, 00:15:45.259 "data_offset": 2048, 00:15:45.259 "data_size": 63488 00:15:45.259 }, 00:15:45.259 { 00:15:45.259 "name": "BaseBdev3", 00:15:45.259 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:45.259 
"is_configured": true, 00:15:45.259 "data_offset": 2048, 00:15:45.259 "data_size": 63488 00:15:45.259 }, 00:15:45.259 { 00:15:45.260 "name": "BaseBdev4", 00:15:45.260 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:45.260 "is_configured": true, 00:15:45.260 "data_offset": 2048, 00:15:45.260 "data_size": 63488 00:15:45.260 } 00:15:45.260 ] 00:15:45.260 }' 00:15:45.260 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.260 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.518 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.778 "name": "raid_bdev1", 00:15:45.778 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:45.778 "strip_size_kb": 64, 00:15:45.778 "state": "online", 00:15:45.778 "raid_level": "raid5f", 
00:15:45.778 "superblock": true, 00:15:45.778 "num_base_bdevs": 4, 00:15:45.778 "num_base_bdevs_discovered": 4, 00:15:45.778 "num_base_bdevs_operational": 4, 00:15:45.778 "base_bdevs_list": [ 00:15:45.778 { 00:15:45.778 "name": "spare", 00:15:45.778 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 }, 00:15:45.778 { 00:15:45.778 "name": "BaseBdev2", 00:15:45.778 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 }, 00:15:45.778 { 00:15:45.778 "name": "BaseBdev3", 00:15:45.778 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 }, 00:15:45.778 { 00:15:45.778 "name": "BaseBdev4", 00:15:45.778 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 } 00:15:45.778 ] 00:15:45.778 }' 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.778 [2024-12-13 04:31:45.704995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.778 "name": "raid_bdev1", 00:15:45.778 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:45.778 "strip_size_kb": 64, 00:15:45.778 "state": "online", 00:15:45.778 "raid_level": "raid5f", 00:15:45.778 "superblock": true, 00:15:45.778 "num_base_bdevs": 4, 00:15:45.778 "num_base_bdevs_discovered": 3, 00:15:45.778 "num_base_bdevs_operational": 3, 00:15:45.778 "base_bdevs_list": [ 00:15:45.778 { 00:15:45.778 "name": null, 00:15:45.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.778 "is_configured": false, 00:15:45.778 "data_offset": 0, 00:15:45.778 "data_size": 63488 00:15:45.778 }, 00:15:45.778 { 00:15:45.778 "name": "BaseBdev2", 00:15:45.778 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 }, 00:15:45.778 { 00:15:45.778 "name": "BaseBdev3", 00:15:45.778 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 }, 00:15:45.778 { 00:15:45.778 "name": "BaseBdev4", 00:15:45.778 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:45.778 "is_configured": true, 00:15:45.778 "data_offset": 2048, 00:15:45.778 "data_size": 63488 00:15:45.778 } 00:15:45.778 ] 00:15:45.778 }' 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.778 04:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.348 04:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.348 04:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.348 04:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.348 [2024-12-13 04:31:46.176576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.348 [2024-12-13 04:31:46.176767] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.348 [2024-12-13 04:31:46.176826] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:46.348 [2024-12-13 04:31:46.176892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.348 [2024-12-13 04:31:46.183870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:15:46.348 04:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.348 04:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:46.348 [2024-12-13 04:31:46.186329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.288 04:31:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.288 "name": "raid_bdev1", 00:15:47.288 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:47.288 "strip_size_kb": 64, 00:15:47.288 "state": "online", 00:15:47.288 "raid_level": "raid5f", 00:15:47.288 "superblock": true, 00:15:47.288 "num_base_bdevs": 4, 00:15:47.288 "num_base_bdevs_discovered": 4, 00:15:47.288 "num_base_bdevs_operational": 4, 00:15:47.288 "process": { 00:15:47.288 "type": "rebuild", 00:15:47.288 "target": "spare", 00:15:47.288 "progress": { 00:15:47.288 "blocks": 19200, 00:15:47.288 "percent": 10 00:15:47.288 } 00:15:47.288 }, 00:15:47.288 "base_bdevs_list": [ 00:15:47.288 { 00:15:47.288 "name": "spare", 00:15:47.288 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:47.288 "is_configured": true, 00:15:47.288 "data_offset": 2048, 00:15:47.288 "data_size": 63488 00:15:47.288 }, 00:15:47.288 { 00:15:47.288 "name": "BaseBdev2", 00:15:47.288 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:47.288 "is_configured": true, 00:15:47.288 "data_offset": 2048, 00:15:47.288 "data_size": 63488 00:15:47.288 }, 00:15:47.288 { 00:15:47.288 "name": "BaseBdev3", 00:15:47.288 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:47.288 "is_configured": true, 00:15:47.288 "data_offset": 2048, 00:15:47.288 "data_size": 
63488 00:15:47.288 }, 00:15:47.288 { 00:15:47.288 "name": "BaseBdev4", 00:15:47.288 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:47.288 "is_configured": true, 00:15:47.288 "data_offset": 2048, 00:15:47.288 "data_size": 63488 00:15:47.288 } 00:15:47.288 ] 00:15:47.288 }' 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.288 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.548 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.548 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.548 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.548 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.548 [2024-12-13 04:31:47.349564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.548 [2024-12-13 04:31:47.392552] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.548 [2024-12-13 04:31:47.392601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.548 [2024-12-13 04:31:47.392621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.549 [2024-12-13 04:31:47.392628] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.549 "name": "raid_bdev1", 00:15:47.549 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:47.549 "strip_size_kb": 64, 00:15:47.549 "state": "online", 00:15:47.549 "raid_level": "raid5f", 00:15:47.549 "superblock": true, 00:15:47.549 "num_base_bdevs": 4, 00:15:47.549 "num_base_bdevs_discovered": 3, 00:15:47.549 "num_base_bdevs_operational": 3, 00:15:47.549 "base_bdevs_list": [ 00:15:47.549 
{ 00:15:47.549 "name": null, 00:15:47.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.549 "is_configured": false, 00:15:47.549 "data_offset": 0, 00:15:47.549 "data_size": 63488 00:15:47.549 }, 00:15:47.549 { 00:15:47.549 "name": "BaseBdev2", 00:15:47.549 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:47.549 "is_configured": true, 00:15:47.549 "data_offset": 2048, 00:15:47.549 "data_size": 63488 00:15:47.549 }, 00:15:47.549 { 00:15:47.549 "name": "BaseBdev3", 00:15:47.549 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:47.549 "is_configured": true, 00:15:47.549 "data_offset": 2048, 00:15:47.549 "data_size": 63488 00:15:47.549 }, 00:15:47.549 { 00:15:47.549 "name": "BaseBdev4", 00:15:47.549 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:47.549 "is_configured": true, 00:15:47.549 "data_offset": 2048, 00:15:47.549 "data_size": 63488 00:15:47.549 } 00:15:47.549 ] 00:15:47.549 }' 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.549 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.809 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:47.809 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.809 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.809 [2024-12-13 04:31:47.816554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:47.809 [2024-12-13 04:31:47.816652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.809 [2024-12-13 04:31:47.816703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:47.809 [2024-12-13 04:31:47.816730] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.809 [2024-12-13 04:31:47.817234] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.809 [2024-12-13 04:31:47.817296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:47.809 [2024-12-13 04:31:47.817413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:47.809 [2024-12-13 04:31:47.817471] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.809 [2024-12-13 04:31:47.817526] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:47.809 [2024-12-13 04:31:47.817589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.809 [2024-12-13 04:31:47.822213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:15:47.809 spare 00:15:47.809 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.809 04:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:47.809 [2024-12-13 04:31:47.824705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.192 "name": "raid_bdev1", 00:15:49.192 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:49.192 "strip_size_kb": 64, 00:15:49.192 "state": "online", 00:15:49.192 "raid_level": "raid5f", 00:15:49.192 "superblock": true, 00:15:49.192 "num_base_bdevs": 4, 00:15:49.192 "num_base_bdevs_discovered": 4, 00:15:49.192 "num_base_bdevs_operational": 4, 00:15:49.192 "process": { 00:15:49.192 "type": "rebuild", 00:15:49.192 "target": "spare", 00:15:49.192 "progress": { 00:15:49.192 "blocks": 19200, 00:15:49.192 "percent": 10 00:15:49.192 } 00:15:49.192 }, 00:15:49.192 "base_bdevs_list": [ 00:15:49.192 { 00:15:49.192 "name": "spare", 00:15:49.192 "uuid": "d06dec0f-be4f-5fea-875c-3004c8417999", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 }, 00:15:49.192 { 00:15:49.192 "name": "BaseBdev2", 00:15:49.192 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 }, 00:15:49.192 { 00:15:49.192 "name": "BaseBdev3", 00:15:49.192 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 }, 00:15:49.192 { 00:15:49.192 "name": "BaseBdev4", 00:15:49.192 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 } 
00:15:49.192 ] 00:15:49.192 }' 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.192 04:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.192 [2024-12-13 04:31:48.964592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.192 [2024-12-13 04:31:49.031012] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:49.192 [2024-12-13 04:31:49.031080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.192 [2024-12-13 04:31:49.031097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.192 [2024-12-13 04:31:49.031107] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.192 "name": "raid_bdev1", 00:15:49.192 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:49.192 "strip_size_kb": 64, 00:15:49.192 "state": "online", 00:15:49.192 "raid_level": "raid5f", 00:15:49.192 "superblock": true, 00:15:49.192 "num_base_bdevs": 4, 00:15:49.192 "num_base_bdevs_discovered": 3, 00:15:49.192 "num_base_bdevs_operational": 3, 00:15:49.192 "base_bdevs_list": [ 00:15:49.192 { 00:15:49.192 "name": null, 00:15:49.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.192 "is_configured": false, 00:15:49.192 "data_offset": 0, 00:15:49.192 "data_size": 63488 00:15:49.192 }, 00:15:49.192 { 00:15:49.192 
"name": "BaseBdev2", 00:15:49.192 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 }, 00:15:49.192 { 00:15:49.192 "name": "BaseBdev3", 00:15:49.192 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 }, 00:15:49.192 { 00:15:49.192 "name": "BaseBdev4", 00:15:49.192 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:49.192 "is_configured": true, 00:15:49.192 "data_offset": 2048, 00:15:49.192 "data_size": 63488 00:15:49.192 } 00:15:49.192 ] 00:15:49.192 }' 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.192 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.763 "name": "raid_bdev1", 00:15:49.763 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:49.763 "strip_size_kb": 64, 00:15:49.763 "state": "online", 00:15:49.763 "raid_level": "raid5f", 00:15:49.763 "superblock": true, 00:15:49.763 "num_base_bdevs": 4, 00:15:49.763 "num_base_bdevs_discovered": 3, 00:15:49.763 "num_base_bdevs_operational": 3, 00:15:49.763 "base_bdevs_list": [ 00:15:49.763 { 00:15:49.763 "name": null, 00:15:49.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.763 "is_configured": false, 00:15:49.763 "data_offset": 0, 00:15:49.763 "data_size": 63488 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "name": "BaseBdev2", 00:15:49.763 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:49.763 "is_configured": true, 00:15:49.763 "data_offset": 2048, 00:15:49.763 "data_size": 63488 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "name": "BaseBdev3", 00:15:49.763 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:49.763 "is_configured": true, 00:15:49.763 "data_offset": 2048, 00:15:49.763 "data_size": 63488 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "name": "BaseBdev4", 00:15:49.763 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:49.763 "is_configured": true, 00:15:49.763 "data_offset": 2048, 00:15:49.763 "data_size": 63488 00:15:49.763 } 00:15:49.763 ] 00:15:49.763 }' 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 [2024-12-13 04:31:49.646016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:49.763 [2024-12-13 04:31:49.646067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.763 [2024-12-13 04:31:49.646088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:49.763 [2024-12-13 04:31:49.646099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.763 [2024-12-13 04:31:49.646540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.763 [2024-12-13 04:31:49.646562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.763 [2024-12-13 04:31:49.646628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:49.763 [2024-12-13 04:31:49.646647] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.763 [2024-12-13 04:31:49.646655] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:49.763 [2024-12-13 04:31:49.646680] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:15:49.763 BaseBdev1 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 04:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.703 04:31:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.703 "name": "raid_bdev1", 00:15:50.703 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:50.703 "strip_size_kb": 64, 00:15:50.703 "state": "online", 00:15:50.703 "raid_level": "raid5f", 00:15:50.703 "superblock": true, 00:15:50.703 "num_base_bdevs": 4, 00:15:50.703 "num_base_bdevs_discovered": 3, 00:15:50.703 "num_base_bdevs_operational": 3, 00:15:50.703 "base_bdevs_list": [ 00:15:50.703 { 00:15:50.703 "name": null, 00:15:50.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.703 "is_configured": false, 00:15:50.703 "data_offset": 0, 00:15:50.703 "data_size": 63488 00:15:50.703 }, 00:15:50.703 { 00:15:50.703 "name": "BaseBdev2", 00:15:50.703 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:50.703 "is_configured": true, 00:15:50.703 "data_offset": 2048, 00:15:50.703 "data_size": 63488 00:15:50.703 }, 00:15:50.703 { 00:15:50.703 "name": "BaseBdev3", 00:15:50.703 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:50.703 "is_configured": true, 00:15:50.703 "data_offset": 2048, 00:15:50.703 "data_size": 63488 00:15:50.703 }, 00:15:50.703 { 00:15:50.703 "name": "BaseBdev4", 00:15:50.703 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:50.703 "is_configured": true, 00:15:50.703 "data_offset": 2048, 00:15:50.703 "data_size": 63488 00:15:50.703 } 00:15:50.703 ] 00:15:50.703 }' 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.703 04:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.274 04:31:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.274 "name": "raid_bdev1", 00:15:51.274 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:51.274 "strip_size_kb": 64, 00:15:51.274 "state": "online", 00:15:51.274 "raid_level": "raid5f", 00:15:51.274 "superblock": true, 00:15:51.274 "num_base_bdevs": 4, 00:15:51.274 "num_base_bdevs_discovered": 3, 00:15:51.274 "num_base_bdevs_operational": 3, 00:15:51.274 "base_bdevs_list": [ 00:15:51.274 { 00:15:51.274 "name": null, 00:15:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.274 "is_configured": false, 00:15:51.274 "data_offset": 0, 00:15:51.274 "data_size": 63488 00:15:51.274 }, 00:15:51.274 { 00:15:51.274 "name": "BaseBdev2", 00:15:51.274 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:51.274 "is_configured": true, 00:15:51.274 "data_offset": 2048, 00:15:51.274 "data_size": 63488 00:15:51.274 }, 00:15:51.274 { 00:15:51.274 "name": "BaseBdev3", 00:15:51.274 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:51.274 "is_configured": true, 00:15:51.274 "data_offset": 2048, 00:15:51.274 "data_size": 63488 00:15:51.274 }, 00:15:51.274 { 00:15:51.274 "name": "BaseBdev4", 00:15:51.274 "uuid": 
"a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:51.274 "is_configured": true, 00:15:51.274 "data_offset": 2048, 00:15:51.274 "data_size": 63488 00:15:51.274 } 00:15:51.274 ] 00:15:51.274 }' 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.274 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.275 [2024-12-13 04:31:51.271322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.275 
[2024-12-13 04:31:51.271519] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.275 [2024-12-13 04:31:51.271539] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:51.275 request: 00:15:51.275 { 00:15:51.275 "base_bdev": "BaseBdev1", 00:15:51.275 "raid_bdev": "raid_bdev1", 00:15:51.275 "method": "bdev_raid_add_base_bdev", 00:15:51.275 "req_id": 1 00:15:51.275 } 00:15:51.275 Got JSON-RPC error response 00:15:51.275 response: 00:15:51.275 { 00:15:51.275 "code": -22, 00:15:51.275 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:51.275 } 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:51.275 04:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.657 "name": "raid_bdev1", 00:15:52.657 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:52.657 "strip_size_kb": 64, 00:15:52.657 "state": "online", 00:15:52.657 "raid_level": "raid5f", 00:15:52.657 "superblock": true, 00:15:52.657 "num_base_bdevs": 4, 00:15:52.657 "num_base_bdevs_discovered": 3, 00:15:52.657 "num_base_bdevs_operational": 3, 00:15:52.657 "base_bdevs_list": [ 00:15:52.657 { 00:15:52.657 "name": null, 00:15:52.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.657 "is_configured": false, 00:15:52.657 "data_offset": 0, 00:15:52.657 "data_size": 63488 00:15:52.657 }, 00:15:52.657 { 00:15:52.657 "name": "BaseBdev2", 00:15:52.657 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:52.657 "is_configured": true, 00:15:52.657 "data_offset": 2048, 00:15:52.657 "data_size": 63488 00:15:52.657 }, 00:15:52.657 { 00:15:52.657 "name": 
"BaseBdev3", 00:15:52.657 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:52.657 "is_configured": true, 00:15:52.657 "data_offset": 2048, 00:15:52.657 "data_size": 63488 00:15:52.657 }, 00:15:52.657 { 00:15:52.657 "name": "BaseBdev4", 00:15:52.657 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:52.657 "is_configured": true, 00:15:52.657 "data_offset": 2048, 00:15:52.657 "data_size": 63488 00:15:52.657 } 00:15:52.657 ] 00:15:52.657 }' 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.657 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.917 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.917 "name": "raid_bdev1", 00:15:52.917 "uuid": "6be3f265-7ed0-4617-ac4b-2d08163f3e06", 00:15:52.917 
"strip_size_kb": 64, 00:15:52.917 "state": "online", 00:15:52.917 "raid_level": "raid5f", 00:15:52.917 "superblock": true, 00:15:52.917 "num_base_bdevs": 4, 00:15:52.917 "num_base_bdevs_discovered": 3, 00:15:52.917 "num_base_bdevs_operational": 3, 00:15:52.917 "base_bdevs_list": [ 00:15:52.917 { 00:15:52.917 "name": null, 00:15:52.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.917 "is_configured": false, 00:15:52.917 "data_offset": 0, 00:15:52.917 "data_size": 63488 00:15:52.917 }, 00:15:52.917 { 00:15:52.917 "name": "BaseBdev2", 00:15:52.917 "uuid": "442118ef-f36a-59cf-82c5-6ed2e58596ed", 00:15:52.917 "is_configured": true, 00:15:52.917 "data_offset": 2048, 00:15:52.917 "data_size": 63488 00:15:52.917 }, 00:15:52.917 { 00:15:52.917 "name": "BaseBdev3", 00:15:52.917 "uuid": "a25d8167-7005-5385-99be-4baebfae0640", 00:15:52.917 "is_configured": true, 00:15:52.917 "data_offset": 2048, 00:15:52.917 "data_size": 63488 00:15:52.917 }, 00:15:52.917 { 00:15:52.917 "name": "BaseBdev4", 00:15:52.917 "uuid": "a5202ed5-68ac-53fa-a361-33ad99a0d1ec", 00:15:52.918 "is_configured": true, 00:15:52.918 "data_offset": 2048, 00:15:52.918 "data_size": 63488 00:15:52.918 } 00:15:52.918 ] 00:15:52.918 }' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97284 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 97284 ']' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 97284 00:15:52.918 
04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97284 00:15:52.918 killing process with pid 97284 00:15:52.918 Received shutdown signal, test time was about 60.000000 seconds 00:15:52.918 00:15:52.918 Latency(us) 00:15:52.918 [2024-12-13T04:31:52.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.918 [2024-12-13T04:31:52.933Z] =================================================================================================================== 00:15:52.918 [2024-12-13T04:31:52.933Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97284' 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 97284 00:15:52.918 [2024-12-13 04:31:52.896351] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.918 [2024-12-13 04:31:52.896433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.918 [2024-12-13 04:31:52.896527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.918 [2024-12-13 04:31:52.896537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:52.918 04:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 97284 00:15:53.178 [2024-12-13 04:31:52.989952] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.438 04:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.438 00:15:53.438 real 0m25.589s 00:15:53.438 user 0m32.355s 00:15:53.438 sys 0m3.277s 00:15:53.438 04:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.438 04:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.438 ************************************ 00:15:53.438 END TEST raid5f_rebuild_test_sb 00:15:53.438 ************************************ 00:15:53.438 04:31:53 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:53.438 04:31:53 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:53.438 04:31:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:53.438 04:31:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.438 04:31:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.438 ************************************ 00:15:53.438 START TEST raid_state_function_test_sb_4k 00:15:53.438 ************************************ 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=98083 
00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98083' 00:15:53.438 Process raid pid: 98083 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 98083 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98083 ']' 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.438 04:31:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.698 [2024-12-13 04:31:53.483512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:53.698 [2024-12-13 04:31:53.483734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.698 [2024-12-13 04:31:53.640043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.698 [2024-12-13 04:31:53.678714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.958 [2024-12-13 04:31:53.755891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.958 [2024-12-13 04:31:53.755931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.528 [2024-12-13 04:31:54.306995] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.528 [2024-12-13 04:31:54.307117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.528 [2024-12-13 04:31:54.307143] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.528 [2024-12-13 04:31:54.307155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.528 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.529 "name": "Existed_Raid", 00:15:54.529 "uuid": 
"e97212b3-3f2a-4148-b943-d2878364fd8d", 00:15:54.529 "strip_size_kb": 0, 00:15:54.529 "state": "configuring", 00:15:54.529 "raid_level": "raid1", 00:15:54.529 "superblock": true, 00:15:54.529 "num_base_bdevs": 2, 00:15:54.529 "num_base_bdevs_discovered": 0, 00:15:54.529 "num_base_bdevs_operational": 2, 00:15:54.529 "base_bdevs_list": [ 00:15:54.529 { 00:15:54.529 "name": "BaseBdev1", 00:15:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.529 "is_configured": false, 00:15:54.529 "data_offset": 0, 00:15:54.529 "data_size": 0 00:15:54.529 }, 00:15:54.529 { 00:15:54.529 "name": "BaseBdev2", 00:15:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.529 "is_configured": false, 00:15:54.529 "data_offset": 0, 00:15:54.529 "data_size": 0 00:15:54.529 } 00:15:54.529 ] 00:15:54.529 }' 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.529 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 [2024-12-13 04:31:54.726229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.789 [2024-12-13 04:31:54.726325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:54.789 04:31:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 [2024-12-13 04:31:54.738215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.789 [2024-12-13 04:31:54.738302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.789 [2024-12-13 04:31:54.738328] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.789 [2024-12-13 04:31:54.738364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 [2024-12-13 04:31:54.765185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.789 BaseBdev1 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 [ 00:15:54.789 { 00:15:54.789 "name": "BaseBdev1", 00:15:54.789 "aliases": [ 00:15:54.789 "225cd1c0-2069-4cf3-8e21-8037a5fe7222" 00:15:54.789 ], 00:15:54.789 "product_name": "Malloc disk", 00:15:54.789 "block_size": 4096, 00:15:54.789 "num_blocks": 8192, 00:15:54.789 "uuid": "225cd1c0-2069-4cf3-8e21-8037a5fe7222", 00:15:54.789 "assigned_rate_limits": { 00:15:54.789 "rw_ios_per_sec": 0, 00:15:54.789 "rw_mbytes_per_sec": 0, 00:15:54.789 "r_mbytes_per_sec": 0, 00:15:54.789 "w_mbytes_per_sec": 0 00:15:54.789 }, 00:15:54.789 "claimed": true, 00:15:54.789 "claim_type": "exclusive_write", 00:15:54.789 "zoned": false, 00:15:54.789 "supported_io_types": { 00:15:54.789 "read": true, 00:15:54.789 "write": true, 00:15:54.789 "unmap": true, 00:15:54.789 "flush": true, 00:15:54.789 "reset": true, 00:15:54.789 "nvme_admin": false, 00:15:54.789 "nvme_io": false, 00:15:54.789 "nvme_io_md": false, 00:15:54.789 "write_zeroes": true, 00:15:54.789 "zcopy": true, 00:15:54.789 
"get_zone_info": false, 00:15:54.789 "zone_management": false, 00:15:54.789 "zone_append": false, 00:15:54.789 "compare": false, 00:15:54.789 "compare_and_write": false, 00:15:54.789 "abort": true, 00:15:54.789 "seek_hole": false, 00:15:54.789 "seek_data": false, 00:15:54.789 "copy": true, 00:15:54.789 "nvme_iov_md": false 00:15:54.789 }, 00:15:54.789 "memory_domains": [ 00:15:54.789 { 00:15:54.789 "dma_device_id": "system", 00:15:54.789 "dma_device_type": 1 00:15:54.789 }, 00:15:54.789 { 00:15:54.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.789 "dma_device_type": 2 00:15:54.789 } 00:15:54.789 ], 00:15:54.789 "driver_specific": {} 00:15:54.789 } 00:15:54.789 ] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.789 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.050 "name": "Existed_Raid", 00:15:55.050 "uuid": "7856c867-643c-45de-8ec0-f0d7a531ddf7", 00:15:55.050 "strip_size_kb": 0, 00:15:55.050 "state": "configuring", 00:15:55.050 "raid_level": "raid1", 00:15:55.050 "superblock": true, 00:15:55.050 "num_base_bdevs": 2, 00:15:55.050 "num_base_bdevs_discovered": 1, 00:15:55.050 "num_base_bdevs_operational": 2, 00:15:55.050 "base_bdevs_list": [ 00:15:55.050 { 00:15:55.050 "name": "BaseBdev1", 00:15:55.050 "uuid": "225cd1c0-2069-4cf3-8e21-8037a5fe7222", 00:15:55.050 "is_configured": true, 00:15:55.050 "data_offset": 256, 00:15:55.050 "data_size": 7936 00:15:55.050 }, 00:15:55.050 { 00:15:55.050 "name": "BaseBdev2", 00:15:55.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.050 "is_configured": false, 00:15:55.050 "data_offset": 0, 00:15:55.050 "data_size": 0 00:15:55.050 } 00:15:55.050 ] 00:15:55.050 }' 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.050 04:31:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.315 [2024-12-13 04:31:55.232531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.315 [2024-12-13 04:31:55.232622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.315 [2024-12-13 04:31:55.244539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.315 [2024-12-13 04:31:55.246584] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.315 [2024-12-13 04:31:55.246663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.315 04:31:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.315 "name": "Existed_Raid", 00:15:55.315 "uuid": "aacadfbb-edd4-44a6-b028-5cc6ff7e5108", 00:15:55.315 "strip_size_kb": 0, 00:15:55.315 "state": "configuring", 00:15:55.315 "raid_level": "raid1", 00:15:55.315 "superblock": true, 
00:15:55.315 "num_base_bdevs": 2, 00:15:55.315 "num_base_bdevs_discovered": 1, 00:15:55.315 "num_base_bdevs_operational": 2, 00:15:55.315 "base_bdevs_list": [ 00:15:55.315 { 00:15:55.315 "name": "BaseBdev1", 00:15:55.315 "uuid": "225cd1c0-2069-4cf3-8e21-8037a5fe7222", 00:15:55.315 "is_configured": true, 00:15:55.315 "data_offset": 256, 00:15:55.315 "data_size": 7936 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "name": "BaseBdev2", 00:15:55.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.315 "is_configured": false, 00:15:55.315 "data_offset": 0, 00:15:55.315 "data_size": 0 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 }' 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.315 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.884 [2024-12-13 04:31:55.664698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.884 [2024-12-13 04:31:55.664985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:55.884 [2024-12-13 04:31:55.665034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:55.884 [2024-12-13 04:31:55.665388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:55.884 BaseBdev2 00:15:55.884 [2024-12-13 04:31:55.665621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:55.884 [2024-12-13 04:31:55.665681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000001900 00:15:55.884 [2024-12-13 04:31:55.665843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.884 [ 00:15:55.884 { 00:15:55.884 "name": "BaseBdev2", 00:15:55.884 "aliases": [ 00:15:55.884 "97ef28e3-a138-4c83-9262-970152981170" 00:15:55.884 ], 00:15:55.884 "product_name": "Malloc 
disk", 00:15:55.884 "block_size": 4096, 00:15:55.884 "num_blocks": 8192, 00:15:55.884 "uuid": "97ef28e3-a138-4c83-9262-970152981170", 00:15:55.884 "assigned_rate_limits": { 00:15:55.884 "rw_ios_per_sec": 0, 00:15:55.884 "rw_mbytes_per_sec": 0, 00:15:55.884 "r_mbytes_per_sec": 0, 00:15:55.884 "w_mbytes_per_sec": 0 00:15:55.884 }, 00:15:55.884 "claimed": true, 00:15:55.884 "claim_type": "exclusive_write", 00:15:55.884 "zoned": false, 00:15:55.884 "supported_io_types": { 00:15:55.884 "read": true, 00:15:55.884 "write": true, 00:15:55.884 "unmap": true, 00:15:55.884 "flush": true, 00:15:55.884 "reset": true, 00:15:55.884 "nvme_admin": false, 00:15:55.884 "nvme_io": false, 00:15:55.884 "nvme_io_md": false, 00:15:55.884 "write_zeroes": true, 00:15:55.884 "zcopy": true, 00:15:55.884 "get_zone_info": false, 00:15:55.884 "zone_management": false, 00:15:55.884 "zone_append": false, 00:15:55.884 "compare": false, 00:15:55.884 "compare_and_write": false, 00:15:55.884 "abort": true, 00:15:55.884 "seek_hole": false, 00:15:55.884 "seek_data": false, 00:15:55.884 "copy": true, 00:15:55.884 "nvme_iov_md": false 00:15:55.884 }, 00:15:55.884 "memory_domains": [ 00:15:55.884 { 00:15:55.884 "dma_device_id": "system", 00:15:55.884 "dma_device_type": 1 00:15:55.884 }, 00:15:55.884 { 00:15:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.884 "dma_device_type": 2 00:15:55.884 } 00:15:55.884 ], 00:15:55.884 "driver_specific": {} 00:15:55.884 } 00:15:55.884 ] 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:55.884 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.885 "name": "Existed_Raid", 00:15:55.885 "uuid": "aacadfbb-edd4-44a6-b028-5cc6ff7e5108", 00:15:55.885 "strip_size_kb": 0, 00:15:55.885 "state": "online", 
00:15:55.885 "raid_level": "raid1", 00:15:55.885 "superblock": true, 00:15:55.885 "num_base_bdevs": 2, 00:15:55.885 "num_base_bdevs_discovered": 2, 00:15:55.885 "num_base_bdevs_operational": 2, 00:15:55.885 "base_bdevs_list": [ 00:15:55.885 { 00:15:55.885 "name": "BaseBdev1", 00:15:55.885 "uuid": "225cd1c0-2069-4cf3-8e21-8037a5fe7222", 00:15:55.885 "is_configured": true, 00:15:55.885 "data_offset": 256, 00:15:55.885 "data_size": 7936 00:15:55.885 }, 00:15:55.885 { 00:15:55.885 "name": "BaseBdev2", 00:15:55.885 "uuid": "97ef28e3-a138-4c83-9262-970152981170", 00:15:55.885 "is_configured": true, 00:15:55.885 "data_offset": 256, 00:15:55.885 "data_size": 7936 00:15:55.885 } 00:15:55.885 ] 00:15:55.885 }' 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.885 04:31:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 [2024-12-13 04:31:56.172633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.455 "name": "Existed_Raid", 00:15:56.455 "aliases": [ 00:15:56.455 "aacadfbb-edd4-44a6-b028-5cc6ff7e5108" 00:15:56.455 ], 00:15:56.455 "product_name": "Raid Volume", 00:15:56.455 "block_size": 4096, 00:15:56.455 "num_blocks": 7936, 00:15:56.455 "uuid": "aacadfbb-edd4-44a6-b028-5cc6ff7e5108", 00:15:56.455 "assigned_rate_limits": { 00:15:56.455 "rw_ios_per_sec": 0, 00:15:56.455 "rw_mbytes_per_sec": 0, 00:15:56.455 "r_mbytes_per_sec": 0, 00:15:56.455 "w_mbytes_per_sec": 0 00:15:56.455 }, 00:15:56.455 "claimed": false, 00:15:56.455 "zoned": false, 00:15:56.455 "supported_io_types": { 00:15:56.455 "read": true, 00:15:56.455 "write": true, 00:15:56.455 "unmap": false, 00:15:56.455 "flush": false, 00:15:56.455 "reset": true, 00:15:56.455 "nvme_admin": false, 00:15:56.455 "nvme_io": false, 00:15:56.455 "nvme_io_md": false, 00:15:56.455 "write_zeroes": true, 00:15:56.455 "zcopy": false, 00:15:56.455 "get_zone_info": false, 00:15:56.455 "zone_management": false, 00:15:56.455 "zone_append": false, 00:15:56.455 "compare": false, 00:15:56.455 "compare_and_write": false, 00:15:56.455 "abort": false, 00:15:56.455 "seek_hole": false, 00:15:56.455 "seek_data": false, 00:15:56.455 "copy": false, 00:15:56.455 "nvme_iov_md": false 00:15:56.455 }, 00:15:56.455 "memory_domains": [ 00:15:56.455 { 00:15:56.455 "dma_device_id": "system", 00:15:56.455 "dma_device_type": 1 00:15:56.455 }, 00:15:56.455 { 00:15:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.455 "dma_device_type": 2 00:15:56.455 }, 00:15:56.455 { 00:15:56.455 
"dma_device_id": "system", 00:15:56.455 "dma_device_type": 1 00:15:56.455 }, 00:15:56.455 { 00:15:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.455 "dma_device_type": 2 00:15:56.455 } 00:15:56.455 ], 00:15:56.455 "driver_specific": { 00:15:56.455 "raid": { 00:15:56.455 "uuid": "aacadfbb-edd4-44a6-b028-5cc6ff7e5108", 00:15:56.455 "strip_size_kb": 0, 00:15:56.455 "state": "online", 00:15:56.455 "raid_level": "raid1", 00:15:56.455 "superblock": true, 00:15:56.455 "num_base_bdevs": 2, 00:15:56.455 "num_base_bdevs_discovered": 2, 00:15:56.455 "num_base_bdevs_operational": 2, 00:15:56.455 "base_bdevs_list": [ 00:15:56.455 { 00:15:56.455 "name": "BaseBdev1", 00:15:56.455 "uuid": "225cd1c0-2069-4cf3-8e21-8037a5fe7222", 00:15:56.455 "is_configured": true, 00:15:56.455 "data_offset": 256, 00:15:56.455 "data_size": 7936 00:15:56.455 }, 00:15:56.455 { 00:15:56.455 "name": "BaseBdev2", 00:15:56.455 "uuid": "97ef28e3-a138-4c83-9262-970152981170", 00:15:56.455 "is_configured": true, 00:15:56.455 "data_offset": 256, 00:15:56.455 "data_size": 7936 00:15:56.455 } 00:15:56.455 ] 00:15:56.455 } 00:15:56.455 } 00:15:56.455 }' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:56.455 BaseBdev2' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:56.455 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.456 
04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.456 [2024-12-13 04:31:56.388060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.456 04:31:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.456 "name": "Existed_Raid", 00:15:56.456 "uuid": "aacadfbb-edd4-44a6-b028-5cc6ff7e5108", 00:15:56.456 "strip_size_kb": 0, 00:15:56.456 "state": "online", 00:15:56.456 "raid_level": "raid1", 00:15:56.456 "superblock": true, 00:15:56.456 "num_base_bdevs": 2, 00:15:56.456 "num_base_bdevs_discovered": 1, 00:15:56.456 "num_base_bdevs_operational": 1, 00:15:56.456 "base_bdevs_list": [ 00:15:56.456 { 00:15:56.456 "name": null, 00:15:56.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.456 "is_configured": false, 00:15:56.456 "data_offset": 0, 00:15:56.456 "data_size": 7936 00:15:56.456 }, 00:15:56.456 { 00:15:56.456 "name": "BaseBdev2", 00:15:56.456 "uuid": "97ef28e3-a138-4c83-9262-970152981170", 00:15:56.456 "is_configured": true, 00:15:56.456 "data_offset": 256, 00:15:56.456 "data_size": 7936 00:15:56.456 } 00:15:56.456 ] 00:15:56.456 }' 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.456 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:57.026 04:31:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.026 [2024-12-13 04:31:56.924379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:57.026 [2024-12-13 04:31:56.924566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.026 [2024-12-13 04:31:56.945463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.026 [2024-12-13 04:31:56.945520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.026 [2024-12-13 04:31:56.945533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:57.026 04:31:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 98083 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98083 ']' 00:15:57.026 04:31:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98083 00:15:57.026 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:57.026 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.026 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98083 00:15:57.287 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.287 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.287 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98083' 00:15:57.287 killing process with pid 98083 00:15:57.287 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98083 00:15:57.287 [2024-12-13 04:31:57.043638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.287 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98083 00:15:57.287 [2024-12-13 04:31:57.045201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.547 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:57.547 00:15:57.547 real 0m3.988s 00:15:57.547 user 0m6.082s 00:15:57.547 sys 0m0.927s 00:15:57.547 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.547 ************************************ 00:15:57.547 END TEST raid_state_function_test_sb_4k 00:15:57.547 ************************************ 00:15:57.547 04:31:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.547 04:31:57 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:57.547 04:31:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:57.548 04:31:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.548 04:31:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.548 ************************************ 00:15:57.548 START TEST raid_superblock_test_4k 00:15:57.548 ************************************ 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98322 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 98322 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 98322 ']' 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.548 04:31:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.548 [2024-12-13 04:31:57.542467] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:57.548 [2024-12-13 04:31:57.542677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98322 ] 00:15:57.808 [2024-12-13 04:31:57.696962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.808 [2024-12-13 04:31:57.734906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.808 [2024-12-13 04:31:57.812341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.808 [2024-12-13 04:31:57.812490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:15:58.379 04:31:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.379 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 malloc1 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 [2024-12-13 04:31:58.410614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.640 [2024-12-13 04:31:58.410730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.640 
[2024-12-13 04:31:58.410769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:58.640 [2024-12-13 04:31:58.410821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.640 [2024-12-13 04:31:58.413213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.640 [2024-12-13 04:31:58.413295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.640 pt1 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 malloc2 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 [2024-12-13 04:31:58.449147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.640 [2024-12-13 04:31:58.449239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.640 [2024-12-13 04:31:58.449275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.640 [2024-12-13 04:31:58.449303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.640 [2024-12-13 04:31:58.451541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.640 [2024-12-13 04:31:58.451622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.640 pt2 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 [2024-12-13 04:31:58.461172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.640 [2024-12-13 04:31:58.463120] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.640 [2024-12-13 04:31:58.463278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:58.640 [2024-12-13 04:31:58.463297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:58.640 [2024-12-13 04:31:58.463594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:58.640 [2024-12-13 04:31:58.463753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:58.640 [2024-12-13 04:31:58.463764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:58.640 [2024-12-13 04:31:58.463915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.641 "name": "raid_bdev1", 00:15:58.641 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:15:58.641 "strip_size_kb": 0, 00:15:58.641 "state": "online", 00:15:58.641 "raid_level": "raid1", 00:15:58.641 "superblock": true, 00:15:58.641 "num_base_bdevs": 2, 00:15:58.641 "num_base_bdevs_discovered": 2, 00:15:58.641 "num_base_bdevs_operational": 2, 00:15:58.641 "base_bdevs_list": [ 00:15:58.641 { 00:15:58.641 "name": "pt1", 00:15:58.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.641 "is_configured": true, 00:15:58.641 "data_offset": 256, 00:15:58.641 "data_size": 7936 00:15:58.641 }, 00:15:58.641 { 00:15:58.641 "name": "pt2", 00:15:58.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.641 "is_configured": true, 00:15:58.641 "data_offset": 256, 00:15:58.641 "data_size": 7936 00:15:58.641 } 00:15:58.641 ] 00:15:58.641 }' 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.641 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:58.901 04:31:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.901 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.162 [2024-12-13 04:31:58.920877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.162 04:31:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.162 04:31:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.162 "name": "raid_bdev1", 00:15:59.162 "aliases": [ 00:15:59.162 "e3a6720c-d4e6-4509-a6b1-5192e3cd2941" 00:15:59.162 ], 00:15:59.162 "product_name": "Raid Volume", 00:15:59.162 "block_size": 4096, 00:15:59.162 "num_blocks": 7936, 00:15:59.162 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:15:59.162 "assigned_rate_limits": { 00:15:59.162 "rw_ios_per_sec": 0, 00:15:59.162 "rw_mbytes_per_sec": 0, 00:15:59.162 "r_mbytes_per_sec": 0, 00:15:59.162 "w_mbytes_per_sec": 0 00:15:59.162 }, 00:15:59.162 "claimed": false, 00:15:59.162 "zoned": false, 00:15:59.162 "supported_io_types": { 00:15:59.162 "read": true, 00:15:59.162 "write": true, 00:15:59.162 "unmap": false, 00:15:59.162 "flush": false, 
00:15:59.162 "reset": true, 00:15:59.162 "nvme_admin": false, 00:15:59.162 "nvme_io": false, 00:15:59.162 "nvme_io_md": false, 00:15:59.162 "write_zeroes": true, 00:15:59.162 "zcopy": false, 00:15:59.162 "get_zone_info": false, 00:15:59.162 "zone_management": false, 00:15:59.162 "zone_append": false, 00:15:59.162 "compare": false, 00:15:59.162 "compare_and_write": false, 00:15:59.162 "abort": false, 00:15:59.162 "seek_hole": false, 00:15:59.162 "seek_data": false, 00:15:59.162 "copy": false, 00:15:59.162 "nvme_iov_md": false 00:15:59.162 }, 00:15:59.162 "memory_domains": [ 00:15:59.162 { 00:15:59.162 "dma_device_id": "system", 00:15:59.162 "dma_device_type": 1 00:15:59.162 }, 00:15:59.162 { 00:15:59.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.162 "dma_device_type": 2 00:15:59.162 }, 00:15:59.162 { 00:15:59.162 "dma_device_id": "system", 00:15:59.162 "dma_device_type": 1 00:15:59.162 }, 00:15:59.162 { 00:15:59.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.162 "dma_device_type": 2 00:15:59.162 } 00:15:59.162 ], 00:15:59.162 "driver_specific": { 00:15:59.162 "raid": { 00:15:59.162 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:15:59.162 "strip_size_kb": 0, 00:15:59.162 "state": "online", 00:15:59.162 "raid_level": "raid1", 00:15:59.162 "superblock": true, 00:15:59.162 "num_base_bdevs": 2, 00:15:59.162 "num_base_bdevs_discovered": 2, 00:15:59.162 "num_base_bdevs_operational": 2, 00:15:59.162 "base_bdevs_list": [ 00:15:59.162 { 00:15:59.162 "name": "pt1", 00:15:59.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.162 "is_configured": true, 00:15:59.162 "data_offset": 256, 00:15:59.162 "data_size": 7936 00:15:59.162 }, 00:15:59.162 { 00:15:59.162 "name": "pt2", 00:15:59.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.162 "is_configured": true, 00:15:59.162 "data_offset": 256, 00:15:59.162 "data_size": 7936 00:15:59.162 } 00:15:59.162 ] 00:15:59.162 } 00:15:59.162 } 00:15:59.162 }' 00:15:59.162 04:31:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.162 pt2' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.162 04:31:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.162 [2024-12-13 04:31:59.152580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.162 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3a6720c-d4e6-4509-a6b1-5192e3cd2941 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z e3a6720c-d4e6-4509-a6b1-5192e3cd2941 ']' 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 [2024-12-13 04:31:59.200273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.423 [2024-12-13 04:31:59.200340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.423 [2024-12-13 04:31:59.200471] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.423 [2024-12-13 04:31:59.200532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.423 [2024-12-13 04:31:59.200541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.423 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 [2024-12-13 04:31:59.328049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.423 [2024-12-13 04:31:59.330236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:59.423 [2024-12-13 04:31:59.330374] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:59.423 [2024-12-13 04:31:59.330453] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:59.423 [2024-12-13 04:31:59.330470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.423 [2024-12-13 04:31:59.330478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:59.423 request: 00:15:59.423 { 00:15:59.423 "name": "raid_bdev1", 00:15:59.423 "raid_level": "raid1", 00:15:59.423 "base_bdevs": [ 00:15:59.423 "malloc1", 00:15:59.423 "malloc2" 00:15:59.423 ], 00:15:59.423 "superblock": false, 00:15:59.423 "method": "bdev_raid_create", 00:15:59.423 "req_id": 1 00:15:59.423 } 00:15:59.423 Got JSON-RPC error response 00:15:59.423 response: 00:15:59.423 { 00:15:59.423 "code": -17, 00:15:59.423 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.424 } 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.424 [2024-12-13 04:31:59.391950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.424 [2024-12-13 04:31:59.392051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.424 [2024-12-13 04:31:59.392088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.424 [2024-12-13 04:31:59.392115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.424 [2024-12-13 04:31:59.394485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.424 [2024-12-13 04:31:59.394546] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.424 [2024-12-13 04:31:59.394635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.424 [2024-12-13 04:31:59.394688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.424 pt1 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.424 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.684 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.684 "name": "raid_bdev1", 00:15:59.684 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:15:59.684 "strip_size_kb": 0, 00:15:59.684 "state": "configuring", 00:15:59.684 "raid_level": "raid1", 00:15:59.684 "superblock": true, 00:15:59.684 "num_base_bdevs": 2, 00:15:59.684 "num_base_bdevs_discovered": 1, 00:15:59.684 "num_base_bdevs_operational": 2, 00:15:59.684 "base_bdevs_list": [ 00:15:59.684 { 00:15:59.684 "name": "pt1", 00:15:59.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.684 "is_configured": true, 00:15:59.684 "data_offset": 256, 00:15:59.684 "data_size": 7936 00:15:59.684 }, 00:15:59.684 { 00:15:59.684 "name": null, 00:15:59.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.684 "is_configured": false, 00:15:59.684 "data_offset": 256, 00:15:59.684 "data_size": 7936 00:15:59.684 } 00:15:59.684 ] 00:15:59.684 }' 00:15:59.684 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.684 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.944 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:59.944 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:59.944 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.944 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.944 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.944 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:15:59.944 [2024-12-13 04:31:59.847138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.944 [2024-12-13 04:31:59.847231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.944 [2024-12-13 04:31:59.847251] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:59.944 [2024-12-13 04:31:59.847259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.944 [2024-12-13 04:31:59.847589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.944 [2024-12-13 04:31:59.847610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.944 [2024-12-13 04:31:59.847660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:59.944 [2024-12-13 04:31:59.847676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.944 [2024-12-13 04:31:59.847759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:59.944 [2024-12-13 04:31:59.847767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:59.944 [2024-12-13 04:31:59.848009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:59.944 [2024-12-13 04:31:59.848112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:59.944 [2024-12-13 04:31:59.848127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:59.944 [2024-12-13 04:31:59.848211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.944 pt2 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.945 04:31:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.945 "name": "raid_bdev1", 00:15:59.945 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:15:59.945 
"strip_size_kb": 0, 00:15:59.945 "state": "online", 00:15:59.945 "raid_level": "raid1", 00:15:59.945 "superblock": true, 00:15:59.945 "num_base_bdevs": 2, 00:15:59.945 "num_base_bdevs_discovered": 2, 00:15:59.945 "num_base_bdevs_operational": 2, 00:15:59.945 "base_bdevs_list": [ 00:15:59.945 { 00:15:59.945 "name": "pt1", 00:15:59.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.945 "is_configured": true, 00:15:59.945 "data_offset": 256, 00:15:59.945 "data_size": 7936 00:15:59.945 }, 00:15:59.945 { 00:15:59.945 "name": "pt2", 00:15:59.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.945 "is_configured": true, 00:15:59.945 "data_offset": 256, 00:15:59.945 "data_size": 7936 00:15:59.945 } 00:15:59.945 ] 00:15:59.945 }' 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.945 04:31:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.205 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:00.205 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:00.205 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.205 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.205 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.205 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.466 04:32:00 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.466 [2024-12-13 04:32:00.226745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.466 "name": "raid_bdev1", 00:16:00.466 "aliases": [ 00:16:00.466 "e3a6720c-d4e6-4509-a6b1-5192e3cd2941" 00:16:00.466 ], 00:16:00.466 "product_name": "Raid Volume", 00:16:00.466 "block_size": 4096, 00:16:00.466 "num_blocks": 7936, 00:16:00.466 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:16:00.466 "assigned_rate_limits": { 00:16:00.466 "rw_ios_per_sec": 0, 00:16:00.466 "rw_mbytes_per_sec": 0, 00:16:00.466 "r_mbytes_per_sec": 0, 00:16:00.466 "w_mbytes_per_sec": 0 00:16:00.466 }, 00:16:00.466 "claimed": false, 00:16:00.466 "zoned": false, 00:16:00.466 "supported_io_types": { 00:16:00.466 "read": true, 00:16:00.466 "write": true, 00:16:00.466 "unmap": false, 00:16:00.466 "flush": false, 00:16:00.466 "reset": true, 00:16:00.466 "nvme_admin": false, 00:16:00.466 "nvme_io": false, 00:16:00.466 "nvme_io_md": false, 00:16:00.466 "write_zeroes": true, 00:16:00.466 "zcopy": false, 00:16:00.466 "get_zone_info": false, 00:16:00.466 "zone_management": false, 00:16:00.466 "zone_append": false, 00:16:00.466 "compare": false, 00:16:00.466 "compare_and_write": false, 00:16:00.466 "abort": false, 00:16:00.466 "seek_hole": false, 00:16:00.466 "seek_data": false, 00:16:00.466 "copy": false, 00:16:00.466 "nvme_iov_md": false 00:16:00.466 }, 00:16:00.466 "memory_domains": [ 00:16:00.466 { 00:16:00.466 "dma_device_id": "system", 00:16:00.466 "dma_device_type": 1 00:16:00.466 }, 00:16:00.466 { 00:16:00.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.466 "dma_device_type": 2 00:16:00.466 }, 00:16:00.466 { 00:16:00.466 "dma_device_id": "system", 00:16:00.466 
"dma_device_type": 1 00:16:00.466 }, 00:16:00.466 { 00:16:00.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.466 "dma_device_type": 2 00:16:00.466 } 00:16:00.466 ], 00:16:00.466 "driver_specific": { 00:16:00.466 "raid": { 00:16:00.466 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:16:00.466 "strip_size_kb": 0, 00:16:00.466 "state": "online", 00:16:00.466 "raid_level": "raid1", 00:16:00.466 "superblock": true, 00:16:00.466 "num_base_bdevs": 2, 00:16:00.466 "num_base_bdevs_discovered": 2, 00:16:00.466 "num_base_bdevs_operational": 2, 00:16:00.466 "base_bdevs_list": [ 00:16:00.466 { 00:16:00.466 "name": "pt1", 00:16:00.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.466 "is_configured": true, 00:16:00.466 "data_offset": 256, 00:16:00.466 "data_size": 7936 00:16:00.466 }, 00:16:00.466 { 00:16:00.466 "name": "pt2", 00:16:00.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.466 "is_configured": true, 00:16:00.466 "data_offset": 256, 00:16:00.466 "data_size": 7936 00:16:00.466 } 00:16:00.466 ] 00:16:00.466 } 00:16:00.466 } 00:16:00.466 }' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:00.466 pt2' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.466 
04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.466 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:00.726 [2024-12-13 04:32:00.482272] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' e3a6720c-d4e6-4509-a6b1-5192e3cd2941 '!=' e3a6720c-d4e6-4509-a6b1-5192e3cd2941 ']' 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.726 [2024-12-13 04:32:00.525989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.726 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.727 "name": "raid_bdev1", 00:16:00.727 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:16:00.727 "strip_size_kb": 0, 00:16:00.727 "state": "online", 00:16:00.727 "raid_level": "raid1", 00:16:00.727 "superblock": true, 00:16:00.727 "num_base_bdevs": 2, 00:16:00.727 "num_base_bdevs_discovered": 1, 00:16:00.727 "num_base_bdevs_operational": 1, 00:16:00.727 "base_bdevs_list": [ 00:16:00.727 { 00:16:00.727 "name": null, 00:16:00.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.727 "is_configured": false, 00:16:00.727 "data_offset": 0, 00:16:00.727 "data_size": 7936 00:16:00.727 }, 00:16:00.727 { 00:16:00.727 "name": "pt2", 00:16:00.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.727 "is_configured": true, 00:16:00.727 "data_offset": 256, 00:16:00.727 "data_size": 7936 00:16:00.727 } 00:16:00.727 ] 00:16:00.727 }' 00:16:00.727 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.727 04:32:00 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.986 04:32:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.986 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.986 04:32:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.986 [2024-12-13 04:32:01.001137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.986 [2024-12-13 04:32:01.001203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.986 [2024-12-13 04:32:01.001285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.986 [2024-12-13 04:32:01.001335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.986 [2024-12-13 04:32:01.001365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.247 [2024-12-13 04:32:01.073017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.247 [2024-12-13 04:32:01.073056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.247 [2024-12-13 04:32:01.073071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:01.247 [2024-12-13 04:32:01.073078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.247 [2024-12-13 04:32:01.075487] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.247 [2024-12-13 04:32:01.075549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.247 [2024-12-13 04:32:01.075632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.247 [2024-12-13 04:32:01.075689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.247 [2024-12-13 04:32:01.075798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:01.247 [2024-12-13 04:32:01.075836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:01.247 [2024-12-13 04:32:01.076069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:01.247 [2024-12-13 04:32:01.076211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:01.247 [2024-12-13 04:32:01.076252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:01.247 [2024-12-13 04:32:01.076373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.247 pt2 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.247 "name": "raid_bdev1", 00:16:01.247 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:16:01.247 "strip_size_kb": 0, 00:16:01.247 "state": "online", 00:16:01.247 "raid_level": "raid1", 00:16:01.247 "superblock": true, 00:16:01.247 "num_base_bdevs": 2, 00:16:01.247 "num_base_bdevs_discovered": 1, 00:16:01.247 "num_base_bdevs_operational": 1, 00:16:01.247 "base_bdevs_list": [ 00:16:01.247 { 00:16:01.247 "name": null, 00:16:01.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.247 "is_configured": false, 00:16:01.247 "data_offset": 256, 00:16:01.247 "data_size": 7936 00:16:01.247 }, 00:16:01.247 { 00:16:01.247 "name": "pt2", 00:16:01.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.247 "is_configured": true, 00:16:01.247 "data_offset": 256, 00:16:01.247 "data_size": 7936 00:16:01.247 } 00:16:01.247 ] 00:16:01.247 }' 
00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.247 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.507 [2024-12-13 04:32:01.492526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.507 [2024-12-13 04:32:01.492583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.507 [2024-12-13 04:32:01.492663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.507 [2024-12-13 04:32:01.492709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.507 [2024-12-13 04:32:01.492741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.507 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.768 [2024-12-13 04:32:01.552404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.768 [2024-12-13 04:32:01.552470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.768 [2024-12-13 04:32:01.552483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:01.768 [2024-12-13 04:32:01.552496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.768 [2024-12-13 04:32:01.554730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.768 [2024-12-13 04:32:01.554765] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.768 [2024-12-13 04:32:01.554818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.768 [2024-12-13 04:32:01.554857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.768 [2024-12-13 04:32:01.554931] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:01.768 [2024-12-13 04:32:01.554942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.768 [2024-12-13 04:32:01.554964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:01.768 [2024-12-13 04:32:01.555003] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.768 [2024-12-13 04:32:01.555060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:01.768 [2024-12-13 04:32:01.555070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:01.768 [2024-12-13 04:32:01.555267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:01.768 [2024-12-13 04:32:01.555364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:01.768 [2024-12-13 04:32:01.555371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:01.768 [2024-12-13 04:32:01.555503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.768 pt1 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.768 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.769 "name": "raid_bdev1", 00:16:01.769 "uuid": "e3a6720c-d4e6-4509-a6b1-5192e3cd2941", 00:16:01.769 "strip_size_kb": 0, 00:16:01.769 "state": "online", 00:16:01.769 "raid_level": "raid1", 00:16:01.769 "superblock": true, 00:16:01.769 "num_base_bdevs": 2, 00:16:01.769 "num_base_bdevs_discovered": 1, 00:16:01.769 "num_base_bdevs_operational": 1, 00:16:01.769 "base_bdevs_list": [ 00:16:01.769 { 00:16:01.769 "name": null, 00:16:01.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.769 "is_configured": false, 00:16:01.769 "data_offset": 256, 00:16:01.769 "data_size": 7936 00:16:01.769 }, 00:16:01.769 { 00:16:01.769 "name": "pt2", 00:16:01.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.769 "is_configured": true, 00:16:01.769 "data_offset": 256, 00:16:01.769 "data_size": 7936 00:16:01.769 } 00:16:01.769 ] 00:16:01.769 }' 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.769 04:32:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.029 04:32:02 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:02.029 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.029 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.029 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.029 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.289 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:02.289 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.289 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.289 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:02.290 [2024-12-13 04:32:02.059730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' e3a6720c-d4e6-4509-a6b1-5192e3cd2941 '!=' e3a6720c-d4e6-4509-a6b1-5192e3cd2941 ']' 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98322 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 98322 ']' 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 98322 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98322 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98322' 00:16:02.290 killing process with pid 98322 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 98322 00:16:02.290 [2024-12-13 04:32:02.149402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.290 [2024-12-13 04:32:02.149472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.290 [2024-12-13 04:32:02.149506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.290 [2024-12-13 04:32:02.149513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:02.290 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 98322 00:16:02.290 [2024-12-13 04:32:02.191509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.550 04:32:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:02.550 00:16:02.550 real 0m5.068s 00:16:02.550 user 0m8.112s 00:16:02.550 sys 0m1.125s 00:16:02.550 ************************************ 00:16:02.550 END TEST raid_superblock_test_4k 00:16:02.550 ************************************ 00:16:02.550 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.550 04:32:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 04:32:02 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:16:02.811 04:32:02 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:02.811 04:32:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:02.811 04:32:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.811 04:32:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 ************************************ 00:16:02.811 START TEST raid_rebuild_test_sb_4k 00:16:02.811 ************************************ 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:02.811 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98639 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98639 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98639 ']' 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.812 04:32:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.812 [2024-12-13 04:32:02.703819] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:02.812 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:02.812 Zero copy mechanism will not be used. 00:16:02.812 [2024-12-13 04:32:02.703989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98639 ] 00:16:03.079 [2024-12-13 04:32:02.835640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.079 [2024-12-13 04:32:02.874065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.079 [2024-12-13 04:32:02.951142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.079 [2024-12-13 04:32:02.951178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:03.678 
04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 BaseBdev1_malloc 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 [2024-12-13 04:32:03.565034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.678 [2024-12-13 04:32:03.565135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.678 [2024-12-13 04:32:03.565170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:03.678 [2024-12-13 04:32:03.565190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.678 [2024-12-13 04:32:03.567646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.678 [2024-12-13 04:32:03.567679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.678 BaseBdev1 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.678 BaseBdev2_malloc 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 [2024-12-13 04:32:03.599636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:03.678 [2024-12-13 04:32:03.599686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.678 [2024-12-13 04:32:03.599711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.678 [2024-12-13 04:32:03.599719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.678 [2024-12-13 04:32:03.602055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.678 [2024-12-13 04:32:03.602094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.678 BaseBdev2 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 spare_malloc 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 spare_delay 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 [2024-12-13 04:32:03.646235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.678 [2024-12-13 04:32:03.646282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.678 [2024-12-13 04:32:03.646303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:03.678 [2024-12-13 04:32:03.646311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.678 [2024-12-13 04:32:03.648667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.678 [2024-12-13 04:32:03.648699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.678 spare 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.678 
[2024-12-13 04:32:03.658268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.678 [2024-12-13 04:32:03.660511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.678 [2024-12-13 04:32:03.660663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:03.678 [2024-12-13 04:32:03.660675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.678 [2024-12-13 04:32:03.660959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:03.678 [2024-12-13 04:32:03.661102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:03.678 [2024-12-13 04:32:03.661115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:03.678 [2024-12-13 04:32:03.661223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.678 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.954 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.954 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.954 "name": "raid_bdev1", 00:16:03.954 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:03.954 "strip_size_kb": 0, 00:16:03.954 "state": "online", 00:16:03.954 "raid_level": "raid1", 00:16:03.954 "superblock": true, 00:16:03.954 "num_base_bdevs": 2, 00:16:03.954 "num_base_bdevs_discovered": 2, 00:16:03.954 "num_base_bdevs_operational": 2, 00:16:03.954 "base_bdevs_list": [ 00:16:03.954 { 00:16:03.954 "name": "BaseBdev1", 00:16:03.954 "uuid": "468d9121-cb60-5508-9c32-01310a121d49", 00:16:03.954 "is_configured": true, 00:16:03.954 "data_offset": 256, 00:16:03.954 "data_size": 7936 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "name": "BaseBdev2", 00:16:03.954 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:03.954 "is_configured": true, 00:16:03.954 "data_offset": 256, 00:16:03.954 "data_size": 7936 00:16:03.954 } 00:16:03.954 ] 00:16:03.954 }' 00:16:03.954 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.954 04:32:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.214 [2024-12-13 04:32:04.053789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.214 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:04.474 [2024-12-13 04:32:04.313152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:04.474 /dev/nbd0 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.474 1+0 records in 00:16:04.474 1+0 records out 00:16:04.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477118 s, 8.6 MB/s 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:04.474 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:05.044 7936+0 records in 00:16:05.044 7936+0 records out 00:16:05.044 32505856 bytes (33 MB, 31 MiB) copied, 0.588245 s, 55.3 MB/s 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.044 04:32:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.303 [2024-12-13 04:32:05.193502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.303 [2024-12-13 04:32:05.225162] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.303 "name": 
"raid_bdev1", 00:16:05.303 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:05.303 "strip_size_kb": 0, 00:16:05.303 "state": "online", 00:16:05.303 "raid_level": "raid1", 00:16:05.303 "superblock": true, 00:16:05.303 "num_base_bdevs": 2, 00:16:05.303 "num_base_bdevs_discovered": 1, 00:16:05.303 "num_base_bdevs_operational": 1, 00:16:05.303 "base_bdevs_list": [ 00:16:05.303 { 00:16:05.303 "name": null, 00:16:05.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.303 "is_configured": false, 00:16:05.303 "data_offset": 0, 00:16:05.303 "data_size": 7936 00:16:05.303 }, 00:16:05.303 { 00:16:05.303 "name": "BaseBdev2", 00:16:05.303 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:05.303 "is_configured": true, 00:16:05.303 "data_offset": 256, 00:16:05.303 "data_size": 7936 00:16:05.303 } 00:16:05.303 ] 00:16:05.303 }' 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.303 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.872 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.872 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.872 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.872 [2024-12-13 04:32:05.668521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.872 [2024-12-13 04:32:05.677110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:16:05.872 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.872 04:32:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.872 [2024-12-13 04:32:05.679298] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.812 04:32:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.812 "name": "raid_bdev1", 00:16:06.812 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:06.812 "strip_size_kb": 0, 00:16:06.812 "state": "online", 00:16:06.812 "raid_level": "raid1", 00:16:06.812 "superblock": true, 00:16:06.812 "num_base_bdevs": 2, 00:16:06.812 "num_base_bdevs_discovered": 2, 00:16:06.812 "num_base_bdevs_operational": 2, 00:16:06.812 "process": { 00:16:06.812 "type": "rebuild", 00:16:06.812 "target": "spare", 00:16:06.812 "progress": { 00:16:06.812 "blocks": 2560, 00:16:06.812 "percent": 32 00:16:06.812 } 00:16:06.812 }, 00:16:06.812 "base_bdevs_list": [ 00:16:06.812 { 00:16:06.812 "name": "spare", 00:16:06.812 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:06.812 "is_configured": true, 00:16:06.812 "data_offset": 256, 
00:16:06.812 "data_size": 7936 00:16:06.812 }, 00:16:06.812 { 00:16:06.812 "name": "BaseBdev2", 00:16:06.812 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:06.812 "is_configured": true, 00:16:06.812 "data_offset": 256, 00:16:06.812 "data_size": 7936 00:16:06.812 } 00:16:06.812 ] 00:16:06.812 }' 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.812 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.072 [2024-12-13 04:32:06.839564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.072 [2024-12-13 04:32:06.887471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.072 [2024-12-13 04:32:06.887569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.072 [2024-12-13 04:32:06.887609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.072 [2024-12-13 04:32:06.887639] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.072 
04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.072 "name": "raid_bdev1", 00:16:07.072 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:07.072 "strip_size_kb": 0, 00:16:07.072 "state": "online", 00:16:07.072 "raid_level": "raid1", 00:16:07.072 "superblock": true, 00:16:07.072 "num_base_bdevs": 2, 00:16:07.072 "num_base_bdevs_discovered": 1, 00:16:07.072 
"num_base_bdevs_operational": 1, 00:16:07.072 "base_bdevs_list": [ 00:16:07.072 { 00:16:07.072 "name": null, 00:16:07.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.072 "is_configured": false, 00:16:07.072 "data_offset": 0, 00:16:07.072 "data_size": 7936 00:16:07.072 }, 00:16:07.072 { 00:16:07.072 "name": "BaseBdev2", 00:16:07.072 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:07.072 "is_configured": true, 00:16:07.072 "data_offset": 256, 00:16:07.072 "data_size": 7936 00:16:07.072 } 00:16:07.072 ] 00:16:07.072 }' 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.072 04:32:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.642 
"name": "raid_bdev1", 00:16:07.642 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:07.642 "strip_size_kb": 0, 00:16:07.642 "state": "online", 00:16:07.642 "raid_level": "raid1", 00:16:07.642 "superblock": true, 00:16:07.642 "num_base_bdevs": 2, 00:16:07.642 "num_base_bdevs_discovered": 1, 00:16:07.642 "num_base_bdevs_operational": 1, 00:16:07.642 "base_bdevs_list": [ 00:16:07.642 { 00:16:07.642 "name": null, 00:16:07.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.642 "is_configured": false, 00:16:07.642 "data_offset": 0, 00:16:07.642 "data_size": 7936 00:16:07.642 }, 00:16:07.642 { 00:16:07.642 "name": "BaseBdev2", 00:16:07.642 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:07.642 "is_configured": true, 00:16:07.642 "data_offset": 256, 00:16:07.642 "data_size": 7936 00:16:07.642 } 00:16:07.642 ] 00:16:07.642 }' 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 [2024-12-13 04:32:07.486235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.642 [2024-12-13 04:32:07.492220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:07.642 04:32:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.642 [2024-12-13 04:32:07.494394] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.582 "name": "raid_bdev1", 00:16:08.582 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:08.582 "strip_size_kb": 0, 00:16:08.582 "state": "online", 00:16:08.582 "raid_level": "raid1", 00:16:08.582 "superblock": true, 00:16:08.582 "num_base_bdevs": 2, 00:16:08.582 "num_base_bdevs_discovered": 2, 00:16:08.582 "num_base_bdevs_operational": 2, 00:16:08.582 "process": { 00:16:08.582 "type": "rebuild", 00:16:08.582 "target": "spare", 00:16:08.582 "progress": { 00:16:08.582 "blocks": 2560, 00:16:08.582 
"percent": 32 00:16:08.582 } 00:16:08.582 }, 00:16:08.582 "base_bdevs_list": [ 00:16:08.582 { 00:16:08.582 "name": "spare", 00:16:08.582 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:08.582 "is_configured": true, 00:16:08.582 "data_offset": 256, 00:16:08.582 "data_size": 7936 00:16:08.582 }, 00:16:08.582 { 00:16:08.582 "name": "BaseBdev2", 00:16:08.582 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:08.582 "is_configured": true, 00:16:08.582 "data_offset": 256, 00:16:08.582 "data_size": 7936 00:16:08.582 } 00:16:08.582 ] 00:16:08.582 }' 00:16:08.582 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:08.842 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=577 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.842 "name": "raid_bdev1", 00:16:08.842 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:08.842 "strip_size_kb": 0, 00:16:08.842 "state": "online", 00:16:08.842 "raid_level": "raid1", 00:16:08.842 "superblock": true, 00:16:08.842 "num_base_bdevs": 2, 00:16:08.842 "num_base_bdevs_discovered": 2, 00:16:08.842 "num_base_bdevs_operational": 2, 00:16:08.842 "process": { 00:16:08.842 "type": "rebuild", 00:16:08.842 "target": "spare", 00:16:08.842 "progress": { 00:16:08.842 "blocks": 2816, 00:16:08.842 "percent": 35 00:16:08.842 } 00:16:08.842 }, 00:16:08.842 "base_bdevs_list": [ 00:16:08.842 { 00:16:08.842 "name": "spare", 00:16:08.842 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:08.842 "is_configured": true, 00:16:08.842 "data_offset": 256, 00:16:08.842 "data_size": 7936 00:16:08.842 }, 00:16:08.842 { 00:16:08.842 "name": "BaseBdev2", 
00:16:08.842 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:08.842 "is_configured": true, 00:16:08.842 "data_offset": 256, 00:16:08.842 "data_size": 7936 00:16:08.842 } 00:16:08.842 ] 00:16:08.842 }' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.842 04:32:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.782 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.042 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.042 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.042 "name": "raid_bdev1", 00:16:10.042 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:10.042 "strip_size_kb": 0, 00:16:10.042 "state": "online", 00:16:10.042 "raid_level": "raid1", 00:16:10.042 "superblock": true, 00:16:10.042 "num_base_bdevs": 2, 00:16:10.042 "num_base_bdevs_discovered": 2, 00:16:10.042 "num_base_bdevs_operational": 2, 00:16:10.042 "process": { 00:16:10.042 "type": "rebuild", 00:16:10.042 "target": "spare", 00:16:10.042 "progress": { 00:16:10.042 "blocks": 5632, 00:16:10.042 "percent": 70 00:16:10.042 } 00:16:10.042 }, 00:16:10.042 "base_bdevs_list": [ 00:16:10.042 { 00:16:10.042 "name": "spare", 00:16:10.042 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:10.042 "is_configured": true, 00:16:10.042 "data_offset": 256, 00:16:10.042 "data_size": 7936 00:16:10.042 }, 00:16:10.042 { 00:16:10.042 "name": "BaseBdev2", 00:16:10.042 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:10.042 "is_configured": true, 00:16:10.042 "data_offset": 256, 00:16:10.042 "data_size": 7936 00:16:10.042 } 00:16:10.042 ] 00:16:10.042 }' 00:16:10.042 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.042 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.042 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.043 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.043 04:32:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.613 [2024-12-13 04:32:10.613241] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:10.613 [2024-12-13 04:32:10.613364] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:10.613 [2024-12-13 04:32:10.613511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.183 "name": "raid_bdev1", 00:16:11.183 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:11.183 "strip_size_kb": 0, 00:16:11.183 "state": "online", 00:16:11.183 "raid_level": "raid1", 00:16:11.183 "superblock": true, 00:16:11.183 "num_base_bdevs": 2, 00:16:11.183 "num_base_bdevs_discovered": 2, 00:16:11.183 "num_base_bdevs_operational": 2, 00:16:11.183 "base_bdevs_list": [ 00:16:11.183 { 00:16:11.183 "name": 
"spare", 00:16:11.183 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:11.183 "is_configured": true, 00:16:11.183 "data_offset": 256, 00:16:11.183 "data_size": 7936 00:16:11.183 }, 00:16:11.183 { 00:16:11.183 "name": "BaseBdev2", 00:16:11.183 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:11.183 "is_configured": true, 00:16:11.183 "data_offset": 256, 00:16:11.183 "data_size": 7936 00:16:11.183 } 00:16:11.183 ] 00:16:11.183 }' 00:16:11.183 04:32:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.183 "name": "raid_bdev1", 00:16:11.183 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:11.183 "strip_size_kb": 0, 00:16:11.183 "state": "online", 00:16:11.183 "raid_level": "raid1", 00:16:11.183 "superblock": true, 00:16:11.183 "num_base_bdevs": 2, 00:16:11.183 "num_base_bdevs_discovered": 2, 00:16:11.183 "num_base_bdevs_operational": 2, 00:16:11.183 "base_bdevs_list": [ 00:16:11.183 { 00:16:11.183 "name": "spare", 00:16:11.183 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:11.183 "is_configured": true, 00:16:11.183 "data_offset": 256, 00:16:11.183 "data_size": 7936 00:16:11.183 }, 00:16:11.183 { 00:16:11.183 "name": "BaseBdev2", 00:16:11.183 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:11.183 "is_configured": true, 00:16:11.183 "data_offset": 256, 00:16:11.183 "data_size": 7936 00:16:11.183 } 00:16:11.183 ] 00:16:11.183 }' 00:16:11.183 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.443 "name": "raid_bdev1", 00:16:11.443 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:11.443 "strip_size_kb": 0, 00:16:11.443 "state": "online", 00:16:11.443 "raid_level": "raid1", 00:16:11.443 "superblock": true, 00:16:11.443 "num_base_bdevs": 2, 00:16:11.443 "num_base_bdevs_discovered": 2, 00:16:11.443 "num_base_bdevs_operational": 2, 00:16:11.443 "base_bdevs_list": [ 00:16:11.443 { 00:16:11.443 "name": "spare", 00:16:11.443 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:11.443 "is_configured": true, 00:16:11.443 "data_offset": 256, 00:16:11.443 "data_size": 7936 00:16:11.443 }, 00:16:11.443 
{ 00:16:11.443 "name": "BaseBdev2", 00:16:11.443 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:11.443 "is_configured": true, 00:16:11.443 "data_offset": 256, 00:16:11.443 "data_size": 7936 00:16:11.443 } 00:16:11.443 ] 00:16:11.443 }' 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.443 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.703 [2024-12-13 04:32:11.661974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.703 [2024-12-13 04:32:11.662047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.703 [2024-12-13 04:32:11.662173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.703 [2024-12-13 04:32:11.662268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.703 [2024-12-13 04:32:11.662325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.703 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.703 
04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:11.963 /dev/nbd0 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.963 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:11.963 04:32:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.964 1+0 records in 00:16:11.964 1+0 records out 00:16:11.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484825 s, 8.4 MB/s 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.964 04:32:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:12.224 /dev/nbd1 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.224 1+0 records in 00:16:12.224 1+0 records out 00:16:12.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408191 s, 10.0 MB/s 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.224 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.484 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.745 [2024-12-13 04:32:12.750865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:12.745 [2024-12-13 04:32:12.750923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.745 [2024-12-13 04:32:12.750946] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:12.745 [2024-12-13 04:32:12.750967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.745 [2024-12-13 04:32:12.753354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.745 [2024-12-13 04:32:12.753427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:12.745 [2024-12-13 04:32:12.753541] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:12.745 [2024-12-13 04:32:12.753622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.745 [2024-12-13 04:32:12.753770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.745 spare 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.745 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.005 [2024-12-13 04:32:12.853699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:13.005 [2024-12-13 04:32:12.853757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:13.005 [2024-12-13 04:32:12.854057] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:13.005 [2024-12-13 04:32:12.854246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:13.005 [2024-12-13 04:32:12.854295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:13.005 [2024-12-13 04:32:12.854482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.005 04:32:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.005 "name": "raid_bdev1", 00:16:13.005 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:13.005 "strip_size_kb": 0, 00:16:13.005 "state": "online", 00:16:13.005 "raid_level": "raid1", 00:16:13.005 "superblock": true, 00:16:13.005 "num_base_bdevs": 2, 00:16:13.005 "num_base_bdevs_discovered": 2, 00:16:13.005 "num_base_bdevs_operational": 2, 00:16:13.005 "base_bdevs_list": [ 00:16:13.005 { 00:16:13.005 "name": "spare", 00:16:13.005 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:13.005 "is_configured": true, 00:16:13.005 "data_offset": 256, 00:16:13.005 "data_size": 7936 00:16:13.005 }, 00:16:13.005 { 00:16:13.005 "name": "BaseBdev2", 00:16:13.005 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:13.005 "is_configured": true, 00:16:13.005 "data_offset": 256, 00:16:13.005 "data_size": 7936 00:16:13.005 } 00:16:13.005 ] 00:16:13.005 }' 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.005 04:32:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.575 "name": "raid_bdev1", 00:16:13.575 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:13.575 "strip_size_kb": 0, 00:16:13.575 "state": "online", 00:16:13.575 "raid_level": "raid1", 00:16:13.575 "superblock": true, 00:16:13.575 "num_base_bdevs": 2, 00:16:13.575 "num_base_bdevs_discovered": 2, 00:16:13.575 "num_base_bdevs_operational": 2, 00:16:13.575 "base_bdevs_list": [ 00:16:13.575 { 00:16:13.575 "name": "spare", 00:16:13.575 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:13.575 "is_configured": true, 00:16:13.575 "data_offset": 256, 00:16:13.575 "data_size": 7936 00:16:13.575 }, 00:16:13.575 { 00:16:13.575 "name": "BaseBdev2", 00:16:13.575 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:13.575 "is_configured": true, 00:16:13.575 "data_offset": 256, 00:16:13.575 "data_size": 7936 00:16:13.575 } 00:16:13.575 ] 00:16:13.575 }' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.575 [2024-12-13 04:32:13.505614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.575 04:32:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.575 "name": "raid_bdev1", 00:16:13.575 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:13.575 "strip_size_kb": 0, 00:16:13.575 "state": "online", 00:16:13.575 "raid_level": "raid1", 00:16:13.575 "superblock": true, 00:16:13.575 "num_base_bdevs": 2, 00:16:13.575 "num_base_bdevs_discovered": 1, 00:16:13.575 "num_base_bdevs_operational": 1, 00:16:13.575 "base_bdevs_list": [ 00:16:13.575 { 00:16:13.575 "name": null, 00:16:13.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.575 "is_configured": false, 00:16:13.575 "data_offset": 0, 00:16:13.575 "data_size": 7936 00:16:13.575 }, 00:16:13.575 { 00:16:13.575 "name": "BaseBdev2", 00:16:13.575 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:13.575 "is_configured": true, 00:16:13.575 "data_offset": 256, 00:16:13.575 "data_size": 7936 00:16:13.575 } 00:16:13.575 ] 00:16:13.575 }' 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.575 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.145 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.145 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.145 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.145 [2024-12-13 04:32:13.936883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.145 [2024-12-13 04:32:13.937063] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:14.145 [2024-12-13 04:32:13.937126] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:14.145 [2024-12-13 04:32:13.937181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.145 [2024-12-13 04:32:13.945609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:14.145 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.145 04:32:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:14.145 [2024-12-13 04:32:13.947783] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.086 
04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.086 04:32:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.086 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.086 "name": "raid_bdev1", 00:16:15.086 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:15.086 "strip_size_kb": 0, 00:16:15.086 "state": "online", 00:16:15.086 "raid_level": "raid1", 00:16:15.086 "superblock": true, 00:16:15.086 "num_base_bdevs": 2, 00:16:15.086 "num_base_bdevs_discovered": 2, 00:16:15.086 "num_base_bdevs_operational": 2, 00:16:15.086 "process": { 00:16:15.086 "type": "rebuild", 00:16:15.086 "target": "spare", 00:16:15.086 "progress": { 00:16:15.086 "blocks": 2560, 00:16:15.086 "percent": 32 00:16:15.086 } 00:16:15.086 }, 00:16:15.086 "base_bdevs_list": [ 00:16:15.086 { 00:16:15.086 "name": "spare", 00:16:15.086 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:15.086 "is_configured": true, 00:16:15.086 "data_offset": 256, 00:16:15.086 "data_size": 7936 00:16:15.086 }, 00:16:15.086 { 00:16:15.086 "name": "BaseBdev2", 00:16:15.086 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:15.086 "is_configured": true, 00:16:15.086 "data_offset": 256, 00:16:15.086 "data_size": 7936 00:16:15.086 } 00:16:15.086 ] 00:16:15.086 }' 00:16:15.086 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.086 04:32:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.086 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.345 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.345 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.346 [2024-12-13 04:32:15.111598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.346 [2024-12-13 04:32:15.155123] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:15.346 [2024-12-13 04:32:15.155174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.346 [2024-12-13 04:32:15.155191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.346 [2024-12-13 04:32:15.155197] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.346 04:32:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.346 "name": "raid_bdev1", 00:16:15.346 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:15.346 "strip_size_kb": 0, 00:16:15.346 "state": "online", 00:16:15.346 "raid_level": "raid1", 00:16:15.346 "superblock": true, 00:16:15.346 "num_base_bdevs": 2, 00:16:15.346 "num_base_bdevs_discovered": 1, 00:16:15.346 "num_base_bdevs_operational": 1, 00:16:15.346 "base_bdevs_list": [ 00:16:15.346 { 00:16:15.346 "name": null, 00:16:15.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.346 "is_configured": false, 00:16:15.346 "data_offset": 0, 00:16:15.346 "data_size": 7936 00:16:15.346 }, 00:16:15.346 { 00:16:15.346 "name": "BaseBdev2", 00:16:15.346 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:15.346 "is_configured": true, 00:16:15.346 "data_offset": 256, 00:16:15.346 
"data_size": 7936 00:16:15.346 } 00:16:15.346 ] 00:16:15.346 }' 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.346 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.916 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.916 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.916 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.916 [2024-12-13 04:32:15.645187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.916 [2024-12-13 04:32:15.645306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.916 [2024-12-13 04:32:15.645350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:15.916 [2024-12-13 04:32:15.645378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.916 [2024-12-13 04:32:15.645870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.916 [2024-12-13 04:32:15.645932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.916 [2024-12-13 04:32:15.646044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:15.916 [2024-12-13 04:32:15.646084] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.916 [2024-12-13 04:32:15.646131] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:15.916 [2024-12-13 04:32:15.646200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.916 [2024-12-13 04:32:15.651314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:15.916 spare 00:16:15.916 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.916 04:32:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:15.916 [2024-12-13 04:32:15.653498] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.856 "name": "raid_bdev1", 00:16:16.856 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:16.856 "strip_size_kb": 0, 00:16:16.856 
"state": "online", 00:16:16.856 "raid_level": "raid1", 00:16:16.856 "superblock": true, 00:16:16.856 "num_base_bdevs": 2, 00:16:16.856 "num_base_bdevs_discovered": 2, 00:16:16.856 "num_base_bdevs_operational": 2, 00:16:16.856 "process": { 00:16:16.856 "type": "rebuild", 00:16:16.856 "target": "spare", 00:16:16.856 "progress": { 00:16:16.856 "blocks": 2560, 00:16:16.856 "percent": 32 00:16:16.856 } 00:16:16.856 }, 00:16:16.856 "base_bdevs_list": [ 00:16:16.856 { 00:16:16.856 "name": "spare", 00:16:16.856 "uuid": "3f04640e-c4e8-5e8c-8e43-0c139f6a62ba", 00:16:16.856 "is_configured": true, 00:16:16.856 "data_offset": 256, 00:16:16.856 "data_size": 7936 00:16:16.856 }, 00:16:16.856 { 00:16:16.856 "name": "BaseBdev2", 00:16:16.856 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:16.856 "is_configured": true, 00:16:16.856 "data_offset": 256, 00:16:16.856 "data_size": 7936 00:16:16.856 } 00:16:16.856 ] 00:16:16.856 }' 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.856 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.856 [2024-12-13 04:32:16.797425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.856 [2024-12-13 04:32:16.860914] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:16.856 [2024-12-13 04:32:16.860971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.856 [2024-12-13 04:32:16.860985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.856 [2024-12-13 04:32:16.860994] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.117 04:32:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.117 "name": "raid_bdev1", 00:16:17.117 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:17.117 "strip_size_kb": 0, 00:16:17.117 "state": "online", 00:16:17.117 "raid_level": "raid1", 00:16:17.117 "superblock": true, 00:16:17.117 "num_base_bdevs": 2, 00:16:17.117 "num_base_bdevs_discovered": 1, 00:16:17.117 "num_base_bdevs_operational": 1, 00:16:17.117 "base_bdevs_list": [ 00:16:17.117 { 00:16:17.117 "name": null, 00:16:17.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.117 "is_configured": false, 00:16:17.117 "data_offset": 0, 00:16:17.117 "data_size": 7936 00:16:17.117 }, 00:16:17.117 { 00:16:17.117 "name": "BaseBdev2", 00:16:17.117 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:17.117 "is_configured": true, 00:16:17.117 "data_offset": 256, 00:16:17.117 "data_size": 7936 00:16:17.117 } 00:16:17.117 ] 00:16:17.117 }' 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.117 04:32:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.377 "name": "raid_bdev1", 00:16:17.377 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:17.377 "strip_size_kb": 0, 00:16:17.377 "state": "online", 00:16:17.377 "raid_level": "raid1", 00:16:17.377 "superblock": true, 00:16:17.377 "num_base_bdevs": 2, 00:16:17.377 "num_base_bdevs_discovered": 1, 00:16:17.377 "num_base_bdevs_operational": 1, 00:16:17.377 "base_bdevs_list": [ 00:16:17.377 { 00:16:17.377 "name": null, 00:16:17.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.377 "is_configured": false, 00:16:17.377 "data_offset": 0, 00:16:17.377 "data_size": 7936 00:16:17.377 }, 00:16:17.377 { 00:16:17.377 "name": "BaseBdev2", 00:16:17.377 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:17.377 "is_configured": true, 00:16:17.377 "data_offset": 256, 00:16:17.377 "data_size": 7936 00:16:17.377 } 00:16:17.377 ] 00:16:17.377 }' 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.377 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.637 [2024-12-13 04:32:17.431049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:17.637 [2024-12-13 04:32:17.431100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.637 [2024-12-13 04:32:17.431123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:17.637 [2024-12-13 04:32:17.431134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.637 [2024-12-13 04:32:17.431580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.637 [2024-12-13 04:32:17.431602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.637 [2024-12-13 04:32:17.431670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:17.637 [2024-12-13 04:32:17.431702] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.637 [2024-12-13 04:32:17.431710] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:17.637 [2024-12-13 04:32:17.431726] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:17.637 BaseBdev1 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.637 04:32:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.577 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.577 "name": "raid_bdev1", 00:16:18.577 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:18.577 "strip_size_kb": 0, 00:16:18.577 "state": "online", 00:16:18.577 "raid_level": "raid1", 00:16:18.578 "superblock": true, 00:16:18.578 "num_base_bdevs": 2, 00:16:18.578 "num_base_bdevs_discovered": 1, 00:16:18.578 "num_base_bdevs_operational": 1, 00:16:18.578 "base_bdevs_list": [ 00:16:18.578 { 00:16:18.578 "name": null, 00:16:18.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.578 "is_configured": false, 00:16:18.578 "data_offset": 0, 00:16:18.578 "data_size": 7936 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "name": "BaseBdev2", 00:16:18.578 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:18.578 "is_configured": true, 00:16:18.578 "data_offset": 256, 00:16:18.578 "data_size": 7936 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 }' 00:16:18.578 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.578 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.147 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.147 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.147 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.147 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.148 "name": "raid_bdev1", 00:16:19.148 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:19.148 "strip_size_kb": 0, 00:16:19.148 "state": "online", 00:16:19.148 "raid_level": "raid1", 00:16:19.148 "superblock": true, 00:16:19.148 "num_base_bdevs": 2, 00:16:19.148 "num_base_bdevs_discovered": 1, 00:16:19.148 "num_base_bdevs_operational": 1, 00:16:19.148 "base_bdevs_list": [ 00:16:19.148 { 00:16:19.148 "name": null, 00:16:19.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.148 "is_configured": false, 00:16:19.148 "data_offset": 0, 00:16:19.148 "data_size": 7936 00:16:19.148 }, 00:16:19.148 { 00:16:19.148 "name": "BaseBdev2", 00:16:19.148 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:19.148 "is_configured": true, 00:16:19.148 "data_offset": 256, 00:16:19.148 "data_size": 7936 00:16:19.148 } 00:16:19.148 ] 00:16:19.148 }' 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.148 04:32:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.148 [2024-12-13 04:32:19.016500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.148 [2024-12-13 04:32:19.016611] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.148 [2024-12-13 04:32:19.016624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:19.148 request: 00:16:19.148 { 00:16:19.148 "base_bdev": "BaseBdev1", 00:16:19.148 "raid_bdev": "raid_bdev1", 00:16:19.148 "method": "bdev_raid_add_base_bdev", 00:16:19.148 "req_id": 1 00:16:19.148 } 00:16:19.148 Got JSON-RPC error response 00:16:19.148 response: 00:16:19.148 { 00:16:19.148 "code": -22, 00:16:19.148 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:19.148 } 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.148 04:32:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.087 "name": "raid_bdev1", 00:16:20.087 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:20.087 "strip_size_kb": 0, 00:16:20.087 "state": "online", 00:16:20.087 "raid_level": "raid1", 00:16:20.087 "superblock": true, 00:16:20.087 "num_base_bdevs": 2, 00:16:20.087 "num_base_bdevs_discovered": 1, 00:16:20.087 "num_base_bdevs_operational": 1, 00:16:20.087 "base_bdevs_list": [ 00:16:20.087 { 00:16:20.087 "name": null, 00:16:20.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.087 "is_configured": false, 00:16:20.087 "data_offset": 0, 00:16:20.087 "data_size": 7936 00:16:20.087 }, 00:16:20.087 { 00:16:20.087 "name": "BaseBdev2", 00:16:20.087 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:20.087 "is_configured": true, 00:16:20.087 "data_offset": 256, 00:16:20.087 "data_size": 7936 00:16:20.087 } 00:16:20.087 ] 00:16:20.087 }' 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.087 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.657 04:32:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.657 "name": "raid_bdev1", 00:16:20.657 "uuid": "1b63f30c-0468-4d34-bd03-b2d4d6bfd5b6", 00:16:20.657 "strip_size_kb": 0, 00:16:20.657 "state": "online", 00:16:20.657 "raid_level": "raid1", 00:16:20.657 "superblock": true, 00:16:20.657 "num_base_bdevs": 2, 00:16:20.657 "num_base_bdevs_discovered": 1, 00:16:20.657 "num_base_bdevs_operational": 1, 00:16:20.657 "base_bdevs_list": [ 00:16:20.657 { 00:16:20.657 "name": null, 00:16:20.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.657 "is_configured": false, 00:16:20.657 "data_offset": 0, 00:16:20.657 "data_size": 7936 00:16:20.657 }, 00:16:20.657 { 00:16:20.657 "name": "BaseBdev2", 00:16:20.657 "uuid": "f21f1550-930d-5ce8-a926-bbe4e14480f2", 00:16:20.657 "is_configured": true, 00:16:20.657 "data_offset": 256, 00:16:20.657 "data_size": 7936 00:16:20.657 } 00:16:20.657 ] 00:16:20.657 }' 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.657 04:32:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98639 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98639 ']' 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98639 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:20.657 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.658 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98639 00:16:20.658 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.658 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.658 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98639' 00:16:20.658 killing process with pid 98639 00:16:20.658 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98639 00:16:20.658 Received shutdown signal, test time was about 60.000000 seconds 00:16:20.658 00:16:20.658 Latency(us) 00:16:20.658 [2024-12-13T04:32:20.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.658 [2024-12-13T04:32:20.673Z] =================================================================================================================== 00:16:20.658 [2024-12-13T04:32:20.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:20.658 [2024-12-13 04:32:20.609672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.658 [2024-12-13 04:32:20.609774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.658 [2024-12-13 04:32:20.609818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:16:20.658 [2024-12-13 04:32:20.609827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:20.658 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98639 00:16:20.658 [2024-12-13 04:32:20.668505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.228 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:21.228 00:16:21.228 real 0m18.374s 00:16:21.228 user 0m24.197s 00:16:21.228 sys 0m2.715s 00:16:21.228 ************************************ 00:16:21.228 END TEST raid_rebuild_test_sb_4k 00:16:21.228 ************************************ 00:16:21.228 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.228 04:32:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.228 04:32:21 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:21.228 04:32:21 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:21.228 04:32:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:21.228 04:32:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.228 04:32:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.228 ************************************ 00:16:21.228 START TEST raid_state_function_test_sb_md_separate 00:16:21.228 ************************************ 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:21.228 04:32:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:21.228 04:32:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:21.228 Process raid pid: 99317 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99317 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99317' 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99317 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99317 ']' 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.228 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.228 [2024-12-13 04:32:21.156847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:21.228 [2024-12-13 04:32:21.157039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.488 [2024-12-13 04:32:21.315323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.488 [2024-12-13 04:32:21.355361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.488 [2024-12-13 04:32:21.432741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.488 [2024-12-13 04:32:21.432781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.058 [2024-12-13 04:32:21.972086] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.058 [2024-12-13 04:32:21.972147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:16:22.058 [2024-12-13 04:32:21.972171] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.058 [2024-12-13 04:32:21.972182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.058 04:32:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.058 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.058 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.058 "name": "Existed_Raid", 00:16:22.058 "uuid": "3fee1a2f-e28f-4b2d-8635-525984f43dd2", 00:16:22.058 "strip_size_kb": 0, 00:16:22.058 "state": "configuring", 00:16:22.058 "raid_level": "raid1", 00:16:22.058 "superblock": true, 00:16:22.058 "num_base_bdevs": 2, 00:16:22.058 "num_base_bdevs_discovered": 0, 00:16:22.058 "num_base_bdevs_operational": 2, 00:16:22.058 "base_bdevs_list": [ 00:16:22.058 { 00:16:22.058 "name": "BaseBdev1", 00:16:22.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.058 "is_configured": false, 00:16:22.058 "data_offset": 0, 00:16:22.058 "data_size": 0 00:16:22.058 }, 00:16:22.058 { 00:16:22.059 "name": "BaseBdev2", 00:16:22.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.059 "is_configured": false, 00:16:22.059 "data_offset": 0, 00:16:22.059 "data_size": 0 00:16:22.059 } 00:16:22.059 ] 00:16:22.059 }' 00:16:22.059 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.059 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 
[2024-12-13 04:32:22.427069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.629 [2024-12-13 04:32:22.427164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 [2024-12-13 04:32:22.439054] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.629 [2024-12-13 04:32:22.439128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.629 [2024-12-13 04:32:22.439153] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.629 [2024-12-13 04:32:22.439187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 [2024-12-13 04:32:22.467313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.629 
BaseBdev1 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.629 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 [ 00:16:22.629 { 00:16:22.629 "name": "BaseBdev1", 00:16:22.629 "aliases": [ 00:16:22.629 "d48267ac-780c-4033-bb56-97774ffdfd2a" 00:16:22.629 ], 00:16:22.629 "product_name": "Malloc disk", 
00:16:22.629 "block_size": 4096, 00:16:22.629 "num_blocks": 8192, 00:16:22.629 "uuid": "d48267ac-780c-4033-bb56-97774ffdfd2a", 00:16:22.629 "md_size": 32, 00:16:22.629 "md_interleave": false, 00:16:22.629 "dif_type": 0, 00:16:22.629 "assigned_rate_limits": { 00:16:22.629 "rw_ios_per_sec": 0, 00:16:22.629 "rw_mbytes_per_sec": 0, 00:16:22.629 "r_mbytes_per_sec": 0, 00:16:22.629 "w_mbytes_per_sec": 0 00:16:22.629 }, 00:16:22.629 "claimed": true, 00:16:22.629 "claim_type": "exclusive_write", 00:16:22.629 "zoned": false, 00:16:22.629 "supported_io_types": { 00:16:22.629 "read": true, 00:16:22.629 "write": true, 00:16:22.629 "unmap": true, 00:16:22.629 "flush": true, 00:16:22.629 "reset": true, 00:16:22.629 "nvme_admin": false, 00:16:22.629 "nvme_io": false, 00:16:22.630 "nvme_io_md": false, 00:16:22.630 "write_zeroes": true, 00:16:22.630 "zcopy": true, 00:16:22.630 "get_zone_info": false, 00:16:22.630 "zone_management": false, 00:16:22.630 "zone_append": false, 00:16:22.630 "compare": false, 00:16:22.630 "compare_and_write": false, 00:16:22.630 "abort": true, 00:16:22.630 "seek_hole": false, 00:16:22.630 "seek_data": false, 00:16:22.630 "copy": true, 00:16:22.630 "nvme_iov_md": false 00:16:22.630 }, 00:16:22.630 "memory_domains": [ 00:16:22.630 { 00:16:22.630 "dma_device_id": "system", 00:16:22.630 "dma_device_type": 1 00:16:22.630 }, 00:16:22.630 { 00:16:22.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.630 "dma_device_type": 2 00:16:22.630 } 00:16:22.630 ], 00:16:22.630 "driver_specific": {} 00:16:22.630 } 00:16:22.630 ] 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:22.630 04:32:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.630 "name": "Existed_Raid", 00:16:22.630 "uuid": "c4c67926-ab9c-436e-ad69-de0f675ad3fd", 
00:16:22.630 "strip_size_kb": 0, 00:16:22.630 "state": "configuring", 00:16:22.630 "raid_level": "raid1", 00:16:22.630 "superblock": true, 00:16:22.630 "num_base_bdevs": 2, 00:16:22.630 "num_base_bdevs_discovered": 1, 00:16:22.630 "num_base_bdevs_operational": 2, 00:16:22.630 "base_bdevs_list": [ 00:16:22.630 { 00:16:22.630 "name": "BaseBdev1", 00:16:22.630 "uuid": "d48267ac-780c-4033-bb56-97774ffdfd2a", 00:16:22.630 "is_configured": true, 00:16:22.630 "data_offset": 256, 00:16:22.630 "data_size": 7936 00:16:22.630 }, 00:16:22.630 { 00:16:22.630 "name": "BaseBdev2", 00:16:22.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.630 "is_configured": false, 00:16:22.630 "data_offset": 0, 00:16:22.630 "data_size": 0 00:16:22.630 } 00:16:22.630 ] 00:16:22.630 }' 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.630 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.200 [2024-12-13 04:32:22.938560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.200 [2024-12-13 04:32:22.938640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:23.200 04:32:22 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.200 [2024-12-13 04:32:22.950580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.200 [2024-12-13 04:32:22.952785] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.200 [2024-12-13 04:32:22.952859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.200 04:32:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.200 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.200 "name": "Existed_Raid", 00:16:23.200 "uuid": "a84ee58c-fdeb-41ac-b022-425b4d3237e9", 00:16:23.200 "strip_size_kb": 0, 00:16:23.200 "state": "configuring", 00:16:23.200 "raid_level": "raid1", 00:16:23.200 "superblock": true, 00:16:23.200 "num_base_bdevs": 2, 00:16:23.200 "num_base_bdevs_discovered": 1, 00:16:23.200 "num_base_bdevs_operational": 2, 00:16:23.200 "base_bdevs_list": [ 00:16:23.200 { 00:16:23.200 "name": "BaseBdev1", 00:16:23.200 "uuid": "d48267ac-780c-4033-bb56-97774ffdfd2a", 00:16:23.200 "is_configured": true, 00:16:23.200 "data_offset": 256, 00:16:23.200 "data_size": 7936 00:16:23.200 }, 00:16:23.200 { 00:16:23.200 "name": "BaseBdev2", 00:16:23.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.200 "is_configured": false, 00:16:23.200 "data_offset": 0, 00:16:23.200 "data_size": 0 00:16:23.200 } 00:16:23.200 ] 00:16:23.200 }' 00:16:23.200 04:32:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.200 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.461 [2024-12-13 04:32:23.415951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.461 [2024-12-13 04:32:23.416246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:23.461 [2024-12-13 04:32:23.416299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:23.461 [2024-12-13 04:32:23.416432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:23.461 [2024-12-13 04:32:23.416626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:23.461 [2024-12-13 04:32:23.416688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:23.461 [2024-12-13 04:32:23.416821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.461 BaseBdev2 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.461 [ 00:16:23.461 { 00:16:23.461 "name": "BaseBdev2", 00:16:23.461 "aliases": [ 00:16:23.461 "99bd0352-254e-4b9c-a18b-46f53a1ec827" 00:16:23.461 ], 00:16:23.461 "product_name": "Malloc disk", 00:16:23.461 "block_size": 4096, 00:16:23.461 "num_blocks": 8192, 00:16:23.461 "uuid": "99bd0352-254e-4b9c-a18b-46f53a1ec827", 00:16:23.461 "md_size": 32, 00:16:23.461 "md_interleave": false, 00:16:23.461 "dif_type": 0, 00:16:23.461 "assigned_rate_limits": { 00:16:23.461 "rw_ios_per_sec": 0, 00:16:23.461 "rw_mbytes_per_sec": 0, 00:16:23.461 "r_mbytes_per_sec": 0, 00:16:23.461 "w_mbytes_per_sec": 0 00:16:23.461 }, 00:16:23.461 "claimed": true, 00:16:23.461 "claim_type": 
"exclusive_write", 00:16:23.461 "zoned": false, 00:16:23.461 "supported_io_types": { 00:16:23.461 "read": true, 00:16:23.461 "write": true, 00:16:23.461 "unmap": true, 00:16:23.461 "flush": true, 00:16:23.461 "reset": true, 00:16:23.461 "nvme_admin": false, 00:16:23.461 "nvme_io": false, 00:16:23.461 "nvme_io_md": false, 00:16:23.461 "write_zeroes": true, 00:16:23.461 "zcopy": true, 00:16:23.461 "get_zone_info": false, 00:16:23.461 "zone_management": false, 00:16:23.461 "zone_append": false, 00:16:23.461 "compare": false, 00:16:23.461 "compare_and_write": false, 00:16:23.461 "abort": true, 00:16:23.461 "seek_hole": false, 00:16:23.461 "seek_data": false, 00:16:23.461 "copy": true, 00:16:23.461 "nvme_iov_md": false 00:16:23.461 }, 00:16:23.461 "memory_domains": [ 00:16:23.461 { 00:16:23.461 "dma_device_id": "system", 00:16:23.461 "dma_device_type": 1 00:16:23.461 }, 00:16:23.461 { 00:16:23.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.461 "dma_device_type": 2 00:16:23.461 } 00:16:23.461 ], 00:16:23.461 "driver_specific": {} 00:16:23.461 } 00:16:23.461 ] 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.461 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.461 
04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.462 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.722 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.722 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.722 "name": "Existed_Raid", 00:16:23.722 "uuid": "a84ee58c-fdeb-41ac-b022-425b4d3237e9", 00:16:23.722 "strip_size_kb": 0, 00:16:23.722 "state": "online", 00:16:23.722 "raid_level": "raid1", 00:16:23.722 "superblock": true, 00:16:23.722 "num_base_bdevs": 2, 00:16:23.722 "num_base_bdevs_discovered": 2, 00:16:23.722 "num_base_bdevs_operational": 2, 00:16:23.722 
"base_bdevs_list": [ 00:16:23.722 { 00:16:23.722 "name": "BaseBdev1", 00:16:23.722 "uuid": "d48267ac-780c-4033-bb56-97774ffdfd2a", 00:16:23.722 "is_configured": true, 00:16:23.722 "data_offset": 256, 00:16:23.722 "data_size": 7936 00:16:23.722 }, 00:16:23.722 { 00:16:23.722 "name": "BaseBdev2", 00:16:23.722 "uuid": "99bd0352-254e-4b9c-a18b-46f53a1ec827", 00:16:23.722 "is_configured": true, 00:16:23.722 "data_offset": 256, 00:16:23.722 "data_size": 7936 00:16:23.722 } 00:16:23.722 ] 00:16:23.722 }' 00:16:23.722 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.722 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:16:23.982 [2024-12-13 04:32:23.903523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:23.982 "name": "Existed_Raid", 00:16:23.982 "aliases": [ 00:16:23.982 "a84ee58c-fdeb-41ac-b022-425b4d3237e9" 00:16:23.982 ], 00:16:23.982 "product_name": "Raid Volume", 00:16:23.982 "block_size": 4096, 00:16:23.982 "num_blocks": 7936, 00:16:23.982 "uuid": "a84ee58c-fdeb-41ac-b022-425b4d3237e9", 00:16:23.982 "md_size": 32, 00:16:23.982 "md_interleave": false, 00:16:23.982 "dif_type": 0, 00:16:23.982 "assigned_rate_limits": { 00:16:23.982 "rw_ios_per_sec": 0, 00:16:23.982 "rw_mbytes_per_sec": 0, 00:16:23.982 "r_mbytes_per_sec": 0, 00:16:23.982 "w_mbytes_per_sec": 0 00:16:23.982 }, 00:16:23.982 "claimed": false, 00:16:23.982 "zoned": false, 00:16:23.982 "supported_io_types": { 00:16:23.982 "read": true, 00:16:23.982 "write": true, 00:16:23.982 "unmap": false, 00:16:23.982 "flush": false, 00:16:23.982 "reset": true, 00:16:23.982 "nvme_admin": false, 00:16:23.982 "nvme_io": false, 00:16:23.982 "nvme_io_md": false, 00:16:23.982 "write_zeroes": true, 00:16:23.982 "zcopy": false, 00:16:23.982 "get_zone_info": false, 00:16:23.982 "zone_management": false, 00:16:23.982 "zone_append": false, 00:16:23.982 "compare": false, 00:16:23.982 "compare_and_write": false, 00:16:23.982 "abort": false, 00:16:23.982 "seek_hole": false, 00:16:23.982 "seek_data": false, 00:16:23.982 "copy": false, 00:16:23.982 "nvme_iov_md": false 00:16:23.982 }, 00:16:23.982 "memory_domains": [ 00:16:23.982 { 00:16:23.982 "dma_device_id": "system", 00:16:23.982 "dma_device_type": 1 00:16:23.982 }, 00:16:23.982 { 00:16:23.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.982 "dma_device_type": 2 00:16:23.982 }, 00:16:23.982 { 
00:16:23.982 "dma_device_id": "system", 00:16:23.982 "dma_device_type": 1 00:16:23.982 }, 00:16:23.982 { 00:16:23.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.982 "dma_device_type": 2 00:16:23.982 } 00:16:23.982 ], 00:16:23.982 "driver_specific": { 00:16:23.982 "raid": { 00:16:23.982 "uuid": "a84ee58c-fdeb-41ac-b022-425b4d3237e9", 00:16:23.982 "strip_size_kb": 0, 00:16:23.982 "state": "online", 00:16:23.982 "raid_level": "raid1", 00:16:23.982 "superblock": true, 00:16:23.982 "num_base_bdevs": 2, 00:16:23.982 "num_base_bdevs_discovered": 2, 00:16:23.982 "num_base_bdevs_operational": 2, 00:16:23.982 "base_bdevs_list": [ 00:16:23.982 { 00:16:23.982 "name": "BaseBdev1", 00:16:23.982 "uuid": "d48267ac-780c-4033-bb56-97774ffdfd2a", 00:16:23.982 "is_configured": true, 00:16:23.982 "data_offset": 256, 00:16:23.982 "data_size": 7936 00:16:23.982 }, 00:16:23.982 { 00:16:23.982 "name": "BaseBdev2", 00:16:23.982 "uuid": "99bd0352-254e-4b9c-a18b-46f53a1ec827", 00:16:23.982 "is_configured": true, 00:16:23.982 "data_offset": 256, 00:16:23.982 "data_size": 7936 00:16:23.982 } 00:16:23.982 ] 00:16:23.982 } 00:16:23.982 } 00:16:23.982 }' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:23.982 BaseBdev2' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.982 04:32:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.242 [2024-12-13 04:32:24.091009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.242 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.242 "name": "Existed_Raid", 00:16:24.242 "uuid": "a84ee58c-fdeb-41ac-b022-425b4d3237e9", 00:16:24.242 "strip_size_kb": 0, 00:16:24.242 "state": "online", 00:16:24.242 "raid_level": "raid1", 00:16:24.242 "superblock": true, 00:16:24.242 "num_base_bdevs": 2, 00:16:24.242 "num_base_bdevs_discovered": 1, 00:16:24.242 "num_base_bdevs_operational": 1, 00:16:24.242 "base_bdevs_list": [ 00:16:24.242 { 00:16:24.242 "name": null, 00:16:24.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.242 "is_configured": false, 00:16:24.242 "data_offset": 0, 00:16:24.242 "data_size": 7936 00:16:24.242 }, 00:16:24.242 { 00:16:24.242 "name": "BaseBdev2", 00:16:24.242 "uuid": 
"99bd0352-254e-4b9c-a18b-46f53a1ec827", 00:16:24.242 "is_configured": true, 00:16:24.242 "data_offset": 256, 00:16:24.242 "data_size": 7936 00:16:24.242 } 00:16:24.242 ] 00:16:24.242 }' 00:16:24.243 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.243 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.812 [2024-12-13 04:32:24.560602] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.812 [2024-12-13 04:32:24.560704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.812 [2024-12-13 04:32:24.582938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.812 [2024-12-13 04:32:24.583046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.812 [2024-12-13 04:32:24.583090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:24.812 04:32:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99317 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99317 ']' 00:16:24.812 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99317 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99317 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99317' 00:16:24.813 killing process with pid 99317 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99317 00:16:24.813 [2024-12-13 04:32:24.664159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.813 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99317 00:16:24.813 [2024-12-13 04:32:24.665751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.073 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:25.073 00:16:25.073 real 0m3.936s 00:16:25.073 user 0m6.033s 00:16:25.073 sys 0m0.871s 00:16:25.073 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.073 
************************************ 00:16:25.073 END TEST raid_state_function_test_sb_md_separate 00:16:25.073 ************************************ 00:16:25.073 04:32:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.073 04:32:25 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:25.073 04:32:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:25.073 04:32:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.073 04:32:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.073 ************************************ 00:16:25.073 START TEST raid_superblock_test_md_separate 00:16:25.073 ************************************ 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99559 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99559 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99559 ']' 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:25.073 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.337 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.337 [2024-12-13 04:32:25.166132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:25.337 [2024-12-13 04:32:25.166369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99559 ] 00:16:25.337 [2024-12-13 04:32:25.318896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.598 [2024-12-13 04:32:25.356685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.598 [2024-12-13 04:32:25.434079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.598 [2024-12-13 04:32:25.434229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:26.167 04:32:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.167 04:32:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.167 malloc1 00:16:26.167 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.167 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.168 [2024-12-13 04:32:26.022275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.168 [2024-12-13 04:32:26.022420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.168 [2024-12-13 04:32:26.022477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:26.168 [2024-12-13 04:32:26.022528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.168 [2024-12-13 04:32:26.024824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.168 [2024-12-13 04:32:26.024905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:26.168 pt1 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.168 malloc2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.168 04:32:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.168 [2024-12-13 04:32:26.062205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.168 [2024-12-13 04:32:26.062304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.168 [2024-12-13 04:32:26.062341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.168 [2024-12-13 04:32:26.062370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.168 [2024-12-13 04:32:26.064646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.168 [2024-12-13 04:32:26.064719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.168 pt2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.168 [2024-12-13 04:32:26.074214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.168 [2024-12-13 04:32:26.076349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.168 [2024-12-13 04:32:26.076540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:26.168 [2024-12-13 04:32:26.076563] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:26.168 [2024-12-13 04:32:26.076652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:26.168 [2024-12-13 04:32:26.076770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:26.168 [2024-12-13 04:32:26.076780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:26.168 [2024-12-13 04:32:26.076868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.168 04:32:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.168 "name": "raid_bdev1", 00:16:26.168 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:26.168 "strip_size_kb": 0, 00:16:26.168 "state": "online", 00:16:26.168 "raid_level": "raid1", 00:16:26.168 "superblock": true, 00:16:26.168 "num_base_bdevs": 2, 00:16:26.168 "num_base_bdevs_discovered": 2, 00:16:26.168 "num_base_bdevs_operational": 2, 00:16:26.168 "base_bdevs_list": [ 00:16:26.168 { 00:16:26.168 "name": "pt1", 00:16:26.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.168 "is_configured": true, 00:16:26.168 "data_offset": 256, 00:16:26.168 "data_size": 7936 00:16:26.168 }, 00:16:26.168 { 00:16:26.168 "name": "pt2", 00:16:26.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.168 "is_configured": true, 00:16:26.168 "data_offset": 256, 00:16:26.168 "data_size": 7936 00:16:26.168 } 00:16:26.168 ] 00:16:26.168 }' 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.168 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 [2024-12-13 04:32:26.525750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.739 "name": "raid_bdev1", 00:16:26.739 "aliases": [ 00:16:26.739 "9ce3708a-00b6-456e-b2a4-04c8d7a68aba" 00:16:26.739 ], 00:16:26.739 "product_name": "Raid Volume", 00:16:26.739 "block_size": 4096, 00:16:26.739 "num_blocks": 7936, 00:16:26.739 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:26.739 "md_size": 32, 00:16:26.739 "md_interleave": false, 00:16:26.739 "dif_type": 0, 00:16:26.739 "assigned_rate_limits": { 00:16:26.739 "rw_ios_per_sec": 0, 00:16:26.739 "rw_mbytes_per_sec": 0, 00:16:26.739 "r_mbytes_per_sec": 0, 00:16:26.739 "w_mbytes_per_sec": 0 00:16:26.739 }, 00:16:26.739 "claimed": false, 00:16:26.739 "zoned": false, 
00:16:26.739 "supported_io_types": { 00:16:26.739 "read": true, 00:16:26.739 "write": true, 00:16:26.739 "unmap": false, 00:16:26.739 "flush": false, 00:16:26.739 "reset": true, 00:16:26.739 "nvme_admin": false, 00:16:26.739 "nvme_io": false, 00:16:26.739 "nvme_io_md": false, 00:16:26.739 "write_zeroes": true, 00:16:26.739 "zcopy": false, 00:16:26.739 "get_zone_info": false, 00:16:26.739 "zone_management": false, 00:16:26.739 "zone_append": false, 00:16:26.739 "compare": false, 00:16:26.739 "compare_and_write": false, 00:16:26.739 "abort": false, 00:16:26.739 "seek_hole": false, 00:16:26.739 "seek_data": false, 00:16:26.739 "copy": false, 00:16:26.739 "nvme_iov_md": false 00:16:26.739 }, 00:16:26.739 "memory_domains": [ 00:16:26.739 { 00:16:26.739 "dma_device_id": "system", 00:16:26.739 "dma_device_type": 1 00:16:26.739 }, 00:16:26.739 { 00:16:26.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.739 "dma_device_type": 2 00:16:26.739 }, 00:16:26.739 { 00:16:26.739 "dma_device_id": "system", 00:16:26.739 "dma_device_type": 1 00:16:26.739 }, 00:16:26.739 { 00:16:26.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.739 "dma_device_type": 2 00:16:26.739 } 00:16:26.739 ], 00:16:26.739 "driver_specific": { 00:16:26.739 "raid": { 00:16:26.739 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:26.739 "strip_size_kb": 0, 00:16:26.739 "state": "online", 00:16:26.739 "raid_level": "raid1", 00:16:26.739 "superblock": true, 00:16:26.739 "num_base_bdevs": 2, 00:16:26.739 "num_base_bdevs_discovered": 2, 00:16:26.739 "num_base_bdevs_operational": 2, 00:16:26.739 "base_bdevs_list": [ 00:16:26.739 { 00:16:26.739 "name": "pt1", 00:16:26.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.739 "is_configured": true, 00:16:26.739 "data_offset": 256, 00:16:26.739 "data_size": 7936 00:16:26.739 }, 00:16:26.739 { 00:16:26.739 "name": "pt2", 00:16:26.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.739 "is_configured": true, 00:16:26.739 "data_offset": 256, 
00:16:26.739 "data_size": 7936 00:16:26.739 } 00:16:26.739 ] 00:16:26.739 } 00:16:26.739 } 00:16:26.739 }' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:26.739 pt2' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.739 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 [2024-12-13 04:32:26.737259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ce3708a-00b6-456e-b2a4-04c8d7a68aba 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 9ce3708a-00b6-456e-b2a4-04c8d7a68aba ']' 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.000 [2024-12-13 04:32:26.769011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.000 [2024-12-13 04:32:26.769078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.000 [2024-12-13 04:32:26.769166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.000 [2024-12-13 04:32:26.769245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.000 [2024-12-13 04:32:26.769256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:27.000 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:27.001 04:32:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.001 [2024-12-13 04:32:26.908756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:27.001 [2024-12-13 04:32:26.910865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:27.001 [2024-12-13 04:32:26.910956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:27.001 [2024-12-13 04:32:26.911035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:27.001 [2024-12-13 04:32:26.911095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.001 [2024-12-13 04:32:26.911134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:27.001 request: 00:16:27.001 { 00:16:27.001 "name": 
"raid_bdev1", 00:16:27.001 "raid_level": "raid1", 00:16:27.001 "base_bdevs": [ 00:16:27.001 "malloc1", 00:16:27.001 "malloc2" 00:16:27.001 ], 00:16:27.001 "superblock": false, 00:16:27.001 "method": "bdev_raid_create", 00:16:27.001 "req_id": 1 00:16:27.001 } 00:16:27.001 Got JSON-RPC error response 00:16:27.001 response: 00:16:27.001 { 00:16:27.001 "code": -17, 00:16:27.001 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:27.001 } 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.001 [2024-12-13 04:32:26.976620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.001 [2024-12-13 04:32:26.976666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.001 [2024-12-13 04:32:26.976683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:27.001 [2024-12-13 04:32:26.976691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.001 [2024-12-13 04:32:26.978883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.001 [2024-12-13 04:32:26.978915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.001 [2024-12-13 04:32:26.978965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:27.001 [2024-12-13 04:32:26.978995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.001 pt1 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.001 04:32:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.001 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.261 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.261 "name": "raid_bdev1", 00:16:27.261 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:27.261 "strip_size_kb": 0, 00:16:27.261 "state": "configuring", 00:16:27.261 "raid_level": "raid1", 00:16:27.261 "superblock": true, 00:16:27.261 "num_base_bdevs": 2, 00:16:27.261 "num_base_bdevs_discovered": 1, 00:16:27.261 "num_base_bdevs_operational": 2, 00:16:27.261 "base_bdevs_list": [ 00:16:27.261 { 00:16:27.261 "name": "pt1", 00:16:27.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.261 "is_configured": true, 00:16:27.261 "data_offset": 256, 00:16:27.261 "data_size": 7936 00:16:27.261 }, 00:16:27.261 { 00:16:27.261 "name": null, 00:16:27.261 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.261 "is_configured": false, 00:16:27.261 "data_offset": 256, 00:16:27.261 "data_size": 7936 00:16:27.261 } 00:16:27.261 ] 00:16:27.261 }' 00:16:27.261 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.261 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.521 [2024-12-13 04:32:27.476120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.521 [2024-12-13 04:32:27.476164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.521 [2024-12-13 04:32:27.476183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:27.521 [2024-12-13 04:32:27.476192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.521 [2024-12-13 04:32:27.476323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.521 [2024-12-13 04:32:27.476336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.521 [2024-12-13 04:32:27.476373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:16:27.521 [2024-12-13 04:32:27.476396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.521 [2024-12-13 04:32:27.476497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:27.521 [2024-12-13 04:32:27.476506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:27.521 [2024-12-13 04:32:27.476576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:27.521 [2024-12-13 04:32:27.476656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:27.521 [2024-12-13 04:32:27.476670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:27.521 [2024-12-13 04:32:27.476727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.521 pt2 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.521 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.522 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.522 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.522 "name": "raid_bdev1", 00:16:27.522 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:27.522 "strip_size_kb": 0, 00:16:27.522 "state": "online", 00:16:27.522 "raid_level": "raid1", 00:16:27.522 "superblock": true, 00:16:27.522 "num_base_bdevs": 2, 00:16:27.522 "num_base_bdevs_discovered": 2, 00:16:27.522 "num_base_bdevs_operational": 2, 00:16:27.522 "base_bdevs_list": [ 00:16:27.522 { 00:16:27.522 "name": "pt1", 00:16:27.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.522 "is_configured": true, 00:16:27.522 "data_offset": 256, 00:16:27.522 "data_size": 7936 00:16:27.522 }, 00:16:27.522 { 00:16:27.522 "name": "pt2", 00:16:27.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.522 "is_configured": true, 00:16:27.522 "data_offset": 256, 
00:16:27.522 "data_size": 7936 00:16:27.522 } 00:16:27.522 ] 00:16:27.522 }' 00:16:27.522 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.522 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 [2024-12-13 04:32:27.887691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:28.117 "name": "raid_bdev1", 00:16:28.117 "aliases": [ 00:16:28.117 "9ce3708a-00b6-456e-b2a4-04c8d7a68aba" 00:16:28.117 ], 00:16:28.117 "product_name": 
"Raid Volume", 00:16:28.117 "block_size": 4096, 00:16:28.117 "num_blocks": 7936, 00:16:28.117 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:28.117 "md_size": 32, 00:16:28.117 "md_interleave": false, 00:16:28.117 "dif_type": 0, 00:16:28.117 "assigned_rate_limits": { 00:16:28.117 "rw_ios_per_sec": 0, 00:16:28.117 "rw_mbytes_per_sec": 0, 00:16:28.117 "r_mbytes_per_sec": 0, 00:16:28.117 "w_mbytes_per_sec": 0 00:16:28.117 }, 00:16:28.117 "claimed": false, 00:16:28.117 "zoned": false, 00:16:28.117 "supported_io_types": { 00:16:28.117 "read": true, 00:16:28.117 "write": true, 00:16:28.117 "unmap": false, 00:16:28.117 "flush": false, 00:16:28.117 "reset": true, 00:16:28.117 "nvme_admin": false, 00:16:28.117 "nvme_io": false, 00:16:28.117 "nvme_io_md": false, 00:16:28.117 "write_zeroes": true, 00:16:28.117 "zcopy": false, 00:16:28.117 "get_zone_info": false, 00:16:28.117 "zone_management": false, 00:16:28.117 "zone_append": false, 00:16:28.117 "compare": false, 00:16:28.117 "compare_and_write": false, 00:16:28.117 "abort": false, 00:16:28.117 "seek_hole": false, 00:16:28.117 "seek_data": false, 00:16:28.117 "copy": false, 00:16:28.117 "nvme_iov_md": false 00:16:28.117 }, 00:16:28.117 "memory_domains": [ 00:16:28.117 { 00:16:28.117 "dma_device_id": "system", 00:16:28.117 "dma_device_type": 1 00:16:28.117 }, 00:16:28.117 { 00:16:28.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.117 "dma_device_type": 2 00:16:28.117 }, 00:16:28.117 { 00:16:28.117 "dma_device_id": "system", 00:16:28.117 "dma_device_type": 1 00:16:28.117 }, 00:16:28.117 { 00:16:28.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.117 "dma_device_type": 2 00:16:28.117 } 00:16:28.117 ], 00:16:28.117 "driver_specific": { 00:16:28.117 "raid": { 00:16:28.117 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:28.117 "strip_size_kb": 0, 00:16:28.117 "state": "online", 00:16:28.117 "raid_level": "raid1", 00:16:28.117 "superblock": true, 00:16:28.117 "num_base_bdevs": 2, 00:16:28.117 
"num_base_bdevs_discovered": 2, 00:16:28.117 "num_base_bdevs_operational": 2, 00:16:28.117 "base_bdevs_list": [ 00:16:28.117 { 00:16:28.117 "name": "pt1", 00:16:28.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.117 "is_configured": true, 00:16:28.117 "data_offset": 256, 00:16:28.117 "data_size": 7936 00:16:28.117 }, 00:16:28.117 { 00:16:28.117 "name": "pt2", 00:16:28.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.117 "is_configured": true, 00:16:28.117 "data_offset": 256, 00:16:28.117 "data_size": 7936 00:16:28.117 } 00:16:28.117 ] 00:16:28.117 } 00:16:28.117 } 00:16:28.117 }' 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:28.117 pt2' 00:16:28.117 04:32:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.117 
04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.118 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:28.118 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:28.118 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:28.118 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:28.118 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.118 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.118 [2024-12-13 04:32:28.123235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 9ce3708a-00b6-456e-b2a4-04c8d7a68aba '!=' 9ce3708a-00b6-456e-b2a4-04c8d7a68aba ']' 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.388 [2024-12-13 04:32:28.166976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.388 04:32:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.388 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.388 "name": "raid_bdev1", 00:16:28.388 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:28.388 "strip_size_kb": 0, 00:16:28.388 "state": "online", 00:16:28.388 "raid_level": "raid1", 00:16:28.388 "superblock": true, 00:16:28.388 "num_base_bdevs": 2, 00:16:28.388 "num_base_bdevs_discovered": 1, 00:16:28.388 "num_base_bdevs_operational": 1, 00:16:28.388 "base_bdevs_list": [ 00:16:28.388 { 00:16:28.388 "name": null, 00:16:28.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.388 "is_configured": false, 00:16:28.388 "data_offset": 0, 00:16:28.388 "data_size": 7936 00:16:28.388 }, 00:16:28.389 { 00:16:28.389 "name": "pt2", 00:16:28.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.389 "is_configured": true, 00:16:28.389 "data_offset": 256, 00:16:28.389 "data_size": 7936 00:16:28.389 } 00:16:28.389 ] 00:16:28.389 }' 00:16:28.389 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:28.389 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.648 [2024-12-13 04:32:28.642123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.648 [2024-12-13 04:32:28.642202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.648 [2024-12-13 04:32:28.642285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.648 [2024-12-13 04:32:28.642341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.648 [2024-12-13 04:32:28.642404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.648 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:28.909 04:32:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.909 [2024-12-13 04:32:28.714009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.909 [2024-12-13 04:32:28.714052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.909 
[2024-12-13 04:32:28.714067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:28.909 [2024-12-13 04:32:28.714075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.909 [2024-12-13 04:32:28.716235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.909 [2024-12-13 04:32:28.716281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.909 [2024-12-13 04:32:28.716335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.909 [2024-12-13 04:32:28.716362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.909 [2024-12-13 04:32:28.716415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:28.909 [2024-12-13 04:32:28.716422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:28.909 [2024-12-13 04:32:28.716521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:28.909 [2024-12-13 04:32:28.716593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:28.909 [2024-12-13 04:32:28.716603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:28.909 [2024-12-13 04:32:28.716659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.909 pt2 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.909 "name": "raid_bdev1", 00:16:28.909 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:28.909 "strip_size_kb": 0, 00:16:28.909 "state": "online", 00:16:28.909 "raid_level": "raid1", 00:16:28.909 "superblock": true, 00:16:28.909 "num_base_bdevs": 2, 00:16:28.909 "num_base_bdevs_discovered": 1, 00:16:28.909 "num_base_bdevs_operational": 1, 00:16:28.909 "base_bdevs_list": [ 00:16:28.909 { 00:16:28.909 
"name": null, 00:16:28.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.909 "is_configured": false, 00:16:28.909 "data_offset": 256, 00:16:28.909 "data_size": 7936 00:16:28.909 }, 00:16:28.909 { 00:16:28.909 "name": "pt2", 00:16:28.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.909 "is_configured": true, 00:16:28.909 "data_offset": 256, 00:16:28.909 "data_size": 7936 00:16:28.909 } 00:16:28.909 ] 00:16:28.909 }' 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.909 04:32:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.169 [2024-12-13 04:32:29.137299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.169 [2024-12-13 04:32:29.137369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.169 [2024-12-13 04:32:29.137433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.169 [2024-12-13 04:32:29.137500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.169 [2024-12-13 04:32:29.137599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.169 04:32:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.169 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.429 [2024-12-13 04:32:29.197226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.429 [2024-12-13 04:32:29.197350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.429 [2024-12-13 04:32:29.197384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:29.429 [2024-12-13 04:32:29.197425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.429 [2024-12-13 04:32:29.199624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.429 [2024-12-13 04:32:29.199713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.429 [2024-12-13 04:32:29.199775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:16:29.429 [2024-12-13 04:32:29.199824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.429 [2024-12-13 04:32:29.199959] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:29.429 [2024-12-13 04:32:29.200001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.429 [2024-12-13 04:32:29.200016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:29.429 [2024-12-13 04:32:29.200053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.429 [2024-12-13 04:32:29.200108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:29.429 [2024-12-13 04:32:29.200118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:29.429 [2024-12-13 04:32:29.200173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:29.429 [2024-12-13 04:32:29.200250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:29.429 [2024-12-13 04:32:29.200260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:29.429 [2024-12-13 04:32:29.200333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.429 pt1 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.429 "name": "raid_bdev1", 00:16:29.429 "uuid": "9ce3708a-00b6-456e-b2a4-04c8d7a68aba", 00:16:29.429 "strip_size_kb": 0, 00:16:29.429 "state": "online", 00:16:29.429 "raid_level": "raid1", 00:16:29.429 "superblock": true, 00:16:29.429 "num_base_bdevs": 2, 00:16:29.429 "num_base_bdevs_discovered": 1, 00:16:29.429 
"num_base_bdevs_operational": 1, 00:16:29.429 "base_bdevs_list": [ 00:16:29.429 { 00:16:29.429 "name": null, 00:16:29.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.429 "is_configured": false, 00:16:29.429 "data_offset": 256, 00:16:29.429 "data_size": 7936 00:16:29.429 }, 00:16:29.429 { 00:16:29.429 "name": "pt2", 00:16:29.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.429 "is_configured": true, 00:16:29.429 "data_offset": 256, 00:16:29.429 "data_size": 7936 00:16:29.429 } 00:16:29.429 ] 00:16:29.429 }' 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.429 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:29.689 [2024-12-13 
04:32:29.656778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 9ce3708a-00b6-456e-b2a4-04c8d7a68aba '!=' 9ce3708a-00b6-456e-b2a4-04c8d7a68aba ']' 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99559 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99559 ']' 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 99559 00:16:29.689 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99559 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.949 killing process with pid 99559 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99559' 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 99559 00:16:29.949 [2024-12-13 04:32:29.729799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.949 [2024-12-13 04:32:29.729852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.949 [2024-12-13 04:32:29.729887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:16:29.949 [2024-12-13 04:32:29.729895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:29.949 04:32:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 99559 00:16:29.949 [2024-12-13 04:32:29.772818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.210 04:32:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:30.210 00:16:30.210 real 0m5.021s 00:16:30.210 user 0m8.007s 00:16:30.210 sys 0m1.178s 00:16:30.210 04:32:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.210 ************************************ 00:16:30.210 END TEST raid_superblock_test_md_separate 00:16:30.210 ************************************ 00:16:30.210 04:32:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 04:32:30 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:30.210 04:32:30 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:30.210 04:32:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:30.210 04:32:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.210 04:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 ************************************ 00:16:30.210 START TEST raid_rebuild_test_sb_md_separate 00:16:30.210 ************************************ 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:30.210 
04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=99882 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 99882 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99882 ']' 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.210 04:32:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.471 [2024-12-13 04:32:30.282676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:30.471 [2024-12-13 04:32:30.282906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99882 ] 00:16:30.471 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:30.471 Zero copy mechanism will not be used. 00:16:30.471 [2024-12-13 04:32:30.438590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.471 [2024-12-13 04:32:30.476311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.731 [2024-12-13 04:32:30.553731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.731 [2024-12-13 04:32:30.553868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.301 BaseBdev1_malloc 
00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.301 [2024-12-13 04:32:31.132799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:31.301 [2024-12-13 04:32:31.132938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.301 [2024-12-13 04:32:31.132992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:31.301 [2024-12-13 04:32:31.133036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.301 [2024-12-13 04:32:31.135291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.301 [2024-12-13 04:32:31.135363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:31.301 BaseBdev1 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.301 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 BaseBdev2_malloc 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 [2024-12-13 04:32:31.168675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:31.302 [2024-12-13 04:32:31.168778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.302 [2024-12-13 04:32:31.168825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:31.302 [2024-12-13 04:32:31.168856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.302 [2024-12-13 04:32:31.171050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.302 [2024-12-13 04:32:31.171122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:31.302 BaseBdev2 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 spare_malloc 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 spare_delay 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 [2024-12-13 04:32:31.233933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.302 [2024-12-13 04:32:31.233998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.302 [2024-12-13 04:32:31.234031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:31.302 [2024-12-13 04:32:31.234043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.302 [2024-12-13 04:32:31.237050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.302 [2024-12-13 04:32:31.237160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.302 spare 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.302 [2024-12-13 04:32:31.246068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.302 [2024-12-13 04:32:31.248249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.302 [2024-12-13 04:32:31.248486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:31.302 [2024-12-13 04:32:31.248505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.302 [2024-12-13 04:32:31.248598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:31.302 [2024-12-13 04:32:31.248717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:31.302 [2024-12-13 04:32:31.248735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:31.302 [2024-12-13 04:32:31.248827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.302 04:32:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.302 "name": "raid_bdev1", 00:16:31.302 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:31.302 "strip_size_kb": 0, 00:16:31.302 "state": "online", 00:16:31.302 "raid_level": "raid1", 00:16:31.302 "superblock": true, 00:16:31.302 "num_base_bdevs": 2, 00:16:31.302 "num_base_bdevs_discovered": 2, 00:16:31.302 "num_base_bdevs_operational": 2, 00:16:31.302 "base_bdevs_list": [ 00:16:31.302 { 00:16:31.302 "name": "BaseBdev1", 00:16:31.302 "uuid": "50565a37-732f-5cc0-b8aa-ac44ababe28b", 00:16:31.302 "is_configured": true, 00:16:31.302 "data_offset": 256, 00:16:31.302 "data_size": 7936 00:16:31.302 }, 00:16:31.302 { 00:16:31.302 "name": "BaseBdev2", 00:16:31.302 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:31.302 "is_configured": true, 00:16:31.302 "data_offset": 256, 00:16:31.302 "data_size": 7936 
00:16:31.302 } 00:16:31.302 ] 00:16:31.302 }' 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.302 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:31.872 [2024-12-13 04:32:31.669588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.872 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:32.132 [2024-12-13 04:32:31.940887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:32.132 /dev/nbd0 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:32.132 04:32:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.132 1+0 records in 00:16:32.132 1+0 records out 00:16:32.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552877 s, 7.4 MB/s 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.132 04:32:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:32.132 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:32.703 7936+0 records in 00:16:32.703 7936+0 records out 00:16:32.703 32505856 bytes (33 MB, 31 MiB) copied, 0.626344 s, 51.9 MB/s 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.703 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:32.963 [2024-12-13 04:32:32.863466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.963 04:32:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.963 [2024-12-13 04:32:32.879535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.963 "name": "raid_bdev1", 00:16:32.963 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:32.963 "strip_size_kb": 0, 00:16:32.963 "state": "online", 00:16:32.963 "raid_level": "raid1", 00:16:32.963 "superblock": true, 00:16:32.963 "num_base_bdevs": 2, 00:16:32.963 "num_base_bdevs_discovered": 1, 00:16:32.963 "num_base_bdevs_operational": 1, 00:16:32.963 "base_bdevs_list": [ 00:16:32.963 { 00:16:32.963 "name": null, 00:16:32.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.963 "is_configured": false, 00:16:32.963 "data_offset": 0, 00:16:32.963 "data_size": 7936 00:16:32.963 }, 00:16:32.963 { 00:16:32.963 "name": "BaseBdev2", 00:16:32.963 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:32.963 "is_configured": true, 00:16:32.963 "data_offset": 256, 00:16:32.963 "data_size": 7936 00:16:32.963 } 00:16:32.963 ] 00:16:32.963 }' 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.963 04:32:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.533 04:32:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.533 04:32:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.533 04:32:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.533 [2024-12-13 04:32:33.342729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.533 [2024-12-13 04:32:33.347242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:16:33.533 04:32:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.533 04:32:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:33.533 [2024-12-13 04:32:33.349490] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.473 "name": "raid_bdev1", 00:16:34.473 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:34.473 "strip_size_kb": 0, 00:16:34.473 "state": "online", 00:16:34.473 "raid_level": "raid1", 00:16:34.473 "superblock": true, 00:16:34.473 "num_base_bdevs": 2, 00:16:34.473 "num_base_bdevs_discovered": 2, 00:16:34.473 "num_base_bdevs_operational": 2, 00:16:34.473 "process": { 00:16:34.473 "type": "rebuild", 00:16:34.473 "target": "spare", 00:16:34.473 "progress": { 00:16:34.473 "blocks": 2560, 00:16:34.473 "percent": 32 00:16:34.473 } 00:16:34.473 }, 00:16:34.473 "base_bdevs_list": [ 00:16:34.473 { 00:16:34.473 "name": "spare", 00:16:34.473 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:34.473 "is_configured": true, 00:16:34.473 "data_offset": 256, 00:16:34.473 "data_size": 7936 00:16:34.473 }, 00:16:34.473 { 00:16:34.473 "name": "BaseBdev2", 00:16:34.473 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:34.473 "is_configured": true, 00:16:34.473 "data_offset": 256, 00:16:34.473 "data_size": 7936 00:16:34.473 } 00:16:34.473 ] 00:16:34.473 }' 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.473 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.733 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.733 04:32:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:34.733 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.733 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.733 [2024-12-13 04:32:34.510141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.733 [2024-12-13 04:32:34.558045] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:34.733 [2024-12-13 04:32:34.558115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.733 [2024-12-13 04:32:34.558135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.734 [2024-12-13 04:32:34.558143] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.734 04:32:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.734 "name": "raid_bdev1", 00:16:34.734 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:34.734 "strip_size_kb": 0, 00:16:34.734 "state": "online", 00:16:34.734 "raid_level": "raid1", 00:16:34.734 "superblock": true, 00:16:34.734 "num_base_bdevs": 2, 00:16:34.734 "num_base_bdevs_discovered": 1, 00:16:34.734 "num_base_bdevs_operational": 1, 00:16:34.734 "base_bdevs_list": [ 00:16:34.734 { 00:16:34.734 "name": null, 00:16:34.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.734 "is_configured": false, 00:16:34.734 "data_offset": 0, 00:16:34.734 "data_size": 7936 00:16:34.734 }, 00:16:34.734 { 00:16:34.734 "name": "BaseBdev2", 00:16:34.734 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:34.734 "is_configured": true, 00:16:34.734 "data_offset": 256, 00:16:34.734 "data_size": 7936 00:16:34.734 } 00:16:34.734 ] 00:16:34.734 }' 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.734 04:32:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.994 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.994 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.994 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.994 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.264 "name": "raid_bdev1", 00:16:35.264 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:35.264 "strip_size_kb": 0, 00:16:35.264 "state": "online", 00:16:35.264 "raid_level": "raid1", 00:16:35.264 "superblock": true, 00:16:35.264 "num_base_bdevs": 2, 00:16:35.264 "num_base_bdevs_discovered": 1, 00:16:35.264 "num_base_bdevs_operational": 1, 00:16:35.264 "base_bdevs_list": [ 00:16:35.264 { 00:16:35.264 "name": null, 00:16:35.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.264 
"is_configured": false, 00:16:35.264 "data_offset": 0, 00:16:35.264 "data_size": 7936 00:16:35.264 }, 00:16:35.264 { 00:16:35.264 "name": "BaseBdev2", 00:16:35.264 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:35.264 "is_configured": true, 00:16:35.264 "data_offset": 256, 00:16:35.264 "data_size": 7936 00:16:35.264 } 00:16:35.264 ] 00:16:35.264 }' 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.264 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.265 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.265 [2024-12-13 04:32:35.170768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.265 [2024-12-13 04:32:35.174129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:35.265 [2024-12-13 04:32:35.176279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.265 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.265 04:32:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.206 04:32:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.206 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.466 "name": "raid_bdev1", 00:16:36.466 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:36.466 "strip_size_kb": 0, 00:16:36.466 "state": "online", 00:16:36.466 "raid_level": "raid1", 00:16:36.466 "superblock": true, 00:16:36.466 "num_base_bdevs": 2, 00:16:36.466 "num_base_bdevs_discovered": 2, 00:16:36.466 "num_base_bdevs_operational": 2, 00:16:36.466 "process": { 00:16:36.466 "type": "rebuild", 00:16:36.466 "target": "spare", 00:16:36.466 "progress": { 00:16:36.466 "blocks": 2560, 00:16:36.466 "percent": 32 00:16:36.466 } 00:16:36.466 }, 00:16:36.466 "base_bdevs_list": [ 00:16:36.466 { 00:16:36.466 "name": "spare", 00:16:36.466 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:36.466 "is_configured": true, 00:16:36.466 "data_offset": 256, 00:16:36.466 "data_size": 7936 00:16:36.466 }, 
00:16:36.466 { 00:16:36.466 "name": "BaseBdev2", 00:16:36.466 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:36.466 "is_configured": true, 00:16:36.466 "data_offset": 256, 00:16:36.466 "data_size": 7936 00:16:36.466 } 00:16:36.466 ] 00:16:36.466 }' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:36.466 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=605 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.466 04:32:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.466 "name": "raid_bdev1", 00:16:36.466 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:36.466 "strip_size_kb": 0, 00:16:36.466 "state": "online", 00:16:36.466 "raid_level": "raid1", 00:16:36.466 "superblock": true, 00:16:36.466 "num_base_bdevs": 2, 00:16:36.466 "num_base_bdevs_discovered": 2, 00:16:36.466 "num_base_bdevs_operational": 2, 00:16:36.466 "process": { 00:16:36.466 "type": "rebuild", 00:16:36.466 "target": "spare", 00:16:36.466 "progress": { 00:16:36.466 "blocks": 2816, 00:16:36.466 "percent": 35 00:16:36.466 } 00:16:36.466 }, 00:16:36.466 "base_bdevs_list": [ 00:16:36.466 { 00:16:36.466 "name": "spare", 00:16:36.466 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:36.466 "is_configured": true, 00:16:36.466 "data_offset": 256, 00:16:36.466 "data_size": 7936 00:16:36.466 }, 00:16:36.466 { 00:16:36.466 "name": "BaseBdev2", 00:16:36.466 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:36.466 
"is_configured": true, 00:16:36.466 "data_offset": 256, 00:16:36.466 "data_size": 7936 00:16:36.466 } 00:16:36.466 ] 00:16:36.466 }' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.466 04:32:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.848 04:32:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.848 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.848 "name": "raid_bdev1", 00:16:37.848 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:37.848 "strip_size_kb": 0, 00:16:37.848 "state": "online", 00:16:37.848 "raid_level": "raid1", 00:16:37.848 "superblock": true, 00:16:37.848 "num_base_bdevs": 2, 00:16:37.848 "num_base_bdevs_discovered": 2, 00:16:37.848 "num_base_bdevs_operational": 2, 00:16:37.848 "process": { 00:16:37.848 "type": "rebuild", 00:16:37.848 "target": "spare", 00:16:37.848 "progress": { 00:16:37.848 "blocks": 5888, 00:16:37.848 "percent": 74 00:16:37.848 } 00:16:37.848 }, 00:16:37.848 "base_bdevs_list": [ 00:16:37.848 { 00:16:37.848 "name": "spare", 00:16:37.848 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:37.848 "is_configured": true, 00:16:37.848 "data_offset": 256, 00:16:37.848 "data_size": 7936 00:16:37.848 }, 00:16:37.848 { 00:16:37.848 "name": "BaseBdev2", 00:16:37.848 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:37.848 "is_configured": true, 00:16:37.849 "data_offset": 256, 00:16:37.849 "data_size": 7936 00:16:37.849 } 00:16:37.849 ] 00:16:37.849 }' 00:16:37.849 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.849 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.849 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.849 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.849 04:32:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.418 [2024-12-13 04:32:38.296387] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:38.418 [2024-12-13 04:32:38.296495] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:38.418 [2024-12-13 04:32:38.296609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.678 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.938 "name": "raid_bdev1", 00:16:38.938 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:38.938 "strip_size_kb": 0, 00:16:38.938 "state": "online", 00:16:38.938 "raid_level": "raid1", 00:16:38.938 "superblock": true, 00:16:38.938 
"num_base_bdevs": 2, 00:16:38.938 "num_base_bdevs_discovered": 2, 00:16:38.938 "num_base_bdevs_operational": 2, 00:16:38.938 "base_bdevs_list": [ 00:16:38.938 { 00:16:38.938 "name": "spare", 00:16:38.938 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:38.938 "is_configured": true, 00:16:38.938 "data_offset": 256, 00:16:38.938 "data_size": 7936 00:16:38.938 }, 00:16:38.938 { 00:16:38.938 "name": "BaseBdev2", 00:16:38.938 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:38.938 "is_configured": true, 00:16:38.938 "data_offset": 256, 00:16:38.938 "data_size": 7936 00:16:38.938 } 00:16:38.938 ] 00:16:38.938 }' 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.938 
04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.938 "name": "raid_bdev1", 00:16:38.938 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:38.938 "strip_size_kb": 0, 00:16:38.938 "state": "online", 00:16:38.938 "raid_level": "raid1", 00:16:38.938 "superblock": true, 00:16:38.938 "num_base_bdevs": 2, 00:16:38.938 "num_base_bdevs_discovered": 2, 00:16:38.938 "num_base_bdevs_operational": 2, 00:16:38.938 "base_bdevs_list": [ 00:16:38.938 { 00:16:38.938 "name": "spare", 00:16:38.938 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:38.938 "is_configured": true, 00:16:38.938 "data_offset": 256, 00:16:38.938 "data_size": 7936 00:16:38.938 }, 00:16:38.938 { 00:16:38.938 "name": "BaseBdev2", 00:16:38.938 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:38.938 "is_configured": true, 00:16:38.938 "data_offset": 256, 00:16:38.938 "data_size": 7936 00:16:38.938 } 00:16:38.938 ] 00:16:38.938 }' 00:16:38.938 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.939 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.199 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.199 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.199 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.199 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.199 04:32:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.199 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.199 "name": "raid_bdev1", 00:16:39.199 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:39.199 
"strip_size_kb": 0, 00:16:39.199 "state": "online", 00:16:39.199 "raid_level": "raid1", 00:16:39.199 "superblock": true, 00:16:39.199 "num_base_bdevs": 2, 00:16:39.199 "num_base_bdevs_discovered": 2, 00:16:39.199 "num_base_bdevs_operational": 2, 00:16:39.199 "base_bdevs_list": [ 00:16:39.199 { 00:16:39.199 "name": "spare", 00:16:39.199 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:39.199 "is_configured": true, 00:16:39.199 "data_offset": 256, 00:16:39.199 "data_size": 7936 00:16:39.199 }, 00:16:39.199 { 00:16:39.199 "name": "BaseBdev2", 00:16:39.199 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:39.199 "is_configured": true, 00:16:39.199 "data_offset": 256, 00:16:39.199 "data_size": 7936 00:16:39.199 } 00:16:39.199 ] 00:16:39.199 }' 00:16:39.199 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.199 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.459 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.459 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.460 [2024-12-13 04:32:39.407680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.460 [2024-12-13 04:32:39.407758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.460 [2024-12-13 04:32:39.407882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.460 [2024-12-13 04:32:39.408003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.460 [2024-12-13 04:32:39.408052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.460 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:39.720 /dev/nbd0 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.720 1+0 records in 00:16:39.720 1+0 records out 00:16:39.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400273 s, 10.2 MB/s 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.720 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:39.981 /dev/nbd1 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.981 1+0 records in 00:16:39.981 1+0 records out 00:16:39.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435055 s, 9.4 MB/s 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.981 04:32:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:40.241 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:40.241 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.241 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.242 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.242 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:40.242 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.242 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.502 [2024-12-13 04:32:40.496390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:40.502 [2024-12-13 04:32:40.496472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.502 [2024-12-13 04:32:40.496495] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:40.502 [2024-12-13 04:32:40.496510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:40.502 [2024-12-13 04:32:40.498845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.502 [2024-12-13 04:32:40.498883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:40.502 [2024-12-13 04:32:40.498941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:40.502 [2024-12-13 04:32:40.498987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.502 [2024-12-13 04:32:40.499108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.502 spare 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.502 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.763 [2024-12-13 04:32:40.599003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:40.763 [2024-12-13 04:32:40.599028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:40.763 [2024-12-13 04:32:40.599127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:40.763 [2024-12-13 04:32:40.599244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:40.763 [2024-12-13 04:32:40.599258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:40.763 [2024-12-13 04:32:40.599345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.763 "name": "raid_bdev1", 00:16:40.763 "uuid": 
"bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:40.763 "strip_size_kb": 0, 00:16:40.763 "state": "online", 00:16:40.763 "raid_level": "raid1", 00:16:40.763 "superblock": true, 00:16:40.763 "num_base_bdevs": 2, 00:16:40.763 "num_base_bdevs_discovered": 2, 00:16:40.763 "num_base_bdevs_operational": 2, 00:16:40.763 "base_bdevs_list": [ 00:16:40.763 { 00:16:40.763 "name": "spare", 00:16:40.763 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:40.763 "is_configured": true, 00:16:40.763 "data_offset": 256, 00:16:40.763 "data_size": 7936 00:16:40.763 }, 00:16:40.763 { 00:16:40.763 "name": "BaseBdev2", 00:16:40.763 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:40.763 "is_configured": true, 00:16:40.763 "data_offset": 256, 00:16:40.763 "data_size": 7936 00:16:40.763 } 00:16:40.763 ] 00:16:40.763 }' 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.763 04:32:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.023 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.283 "name": "raid_bdev1", 00:16:41.283 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:41.283 "strip_size_kb": 0, 00:16:41.283 "state": "online", 00:16:41.283 "raid_level": "raid1", 00:16:41.283 "superblock": true, 00:16:41.283 "num_base_bdevs": 2, 00:16:41.283 "num_base_bdevs_discovered": 2, 00:16:41.283 "num_base_bdevs_operational": 2, 00:16:41.283 "base_bdevs_list": [ 00:16:41.283 { 00:16:41.283 "name": "spare", 00:16:41.283 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:41.283 "is_configured": true, 00:16:41.283 "data_offset": 256, 00:16:41.283 "data_size": 7936 00:16:41.283 }, 00:16:41.283 { 00:16:41.283 "name": "BaseBdev2", 00:16:41.283 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:41.283 "is_configured": true, 00:16:41.283 "data_offset": 256, 00:16:41.283 "data_size": 7936 00:16:41.283 } 00:16:41.283 ] 00:16:41.283 }' 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 [2024-12-13 04:32:41.219288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.283 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.284 "name": "raid_bdev1", 00:16:41.284 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:41.284 "strip_size_kb": 0, 00:16:41.284 "state": "online", 00:16:41.284 "raid_level": "raid1", 00:16:41.284 "superblock": true, 00:16:41.284 "num_base_bdevs": 2, 00:16:41.284 "num_base_bdevs_discovered": 1, 00:16:41.284 "num_base_bdevs_operational": 1, 00:16:41.284 "base_bdevs_list": [ 00:16:41.284 { 00:16:41.284 "name": null, 00:16:41.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.284 "is_configured": false, 00:16:41.284 "data_offset": 0, 00:16:41.284 "data_size": 7936 00:16:41.284 }, 00:16:41.284 { 00:16:41.284 "name": "BaseBdev2", 00:16:41.284 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:41.284 "is_configured": true, 00:16:41.284 "data_offset": 256, 00:16:41.284 "data_size": 7936 00:16:41.284 } 00:16:41.284 ] 00:16:41.284 }' 00:16:41.284 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.284 04:32:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.852 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.852 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 [2024-12-13 04:32:41.658527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.852 [2024-12-13 04:32:41.658749] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.852 [2024-12-13 04:32:41.658825] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:41.852 [2024-12-13 04:32:41.658883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.852 [2024-12-13 04:32:41.663069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:41.852 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.852 04:32:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:41.852 [2024-12-13 04:32:41.665295] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.792 "name": "raid_bdev1", 00:16:42.792 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:42.792 "strip_size_kb": 0, 00:16:42.792 "state": "online", 00:16:42.792 "raid_level": "raid1", 00:16:42.792 "superblock": true, 00:16:42.792 "num_base_bdevs": 2, 00:16:42.792 "num_base_bdevs_discovered": 2, 00:16:42.792 "num_base_bdevs_operational": 2, 00:16:42.792 "process": { 00:16:42.792 "type": "rebuild", 00:16:42.792 "target": "spare", 00:16:42.792 "progress": { 00:16:42.792 "blocks": 2560, 00:16:42.792 "percent": 32 00:16:42.792 } 00:16:42.792 }, 00:16:42.792 "base_bdevs_list": [ 00:16:42.792 { 00:16:42.792 "name": "spare", 00:16:42.792 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:42.792 "is_configured": true, 00:16:42.792 "data_offset": 256, 00:16:42.792 "data_size": 7936 00:16:42.792 }, 00:16:42.792 { 00:16:42.792 "name": "BaseBdev2", 00:16:42.792 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:42.792 "is_configured": true, 00:16:42.792 "data_offset": 256, 00:16:42.792 "data_size": 7936 00:16:42.792 } 00:16:42.792 ] 00:16:42.792 }' 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.792 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.054 [2024-12-13 04:32:42.822803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.054 [2024-12-13 04:32:42.872981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.054 [2024-12-13 04:32:42.873035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.054 [2024-12-13 04:32:42.873053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.054 [2024-12-13 04:32:42.873060] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.054 "name": "raid_bdev1", 00:16:43.054 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:43.054 "strip_size_kb": 0, 00:16:43.054 "state": "online", 00:16:43.054 "raid_level": "raid1", 00:16:43.054 "superblock": true, 00:16:43.054 "num_base_bdevs": 2, 00:16:43.054 "num_base_bdevs_discovered": 1, 00:16:43.054 "num_base_bdevs_operational": 1, 00:16:43.054 "base_bdevs_list": [ 00:16:43.054 { 00:16:43.054 "name": null, 00:16:43.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.054 
"is_configured": false, 00:16:43.054 "data_offset": 0, 00:16:43.054 "data_size": 7936 00:16:43.054 }, 00:16:43.054 { 00:16:43.054 "name": "BaseBdev2", 00:16:43.054 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:43.054 "is_configured": true, 00:16:43.054 "data_offset": 256, 00:16:43.054 "data_size": 7936 00:16:43.054 } 00:16:43.054 ] 00:16:43.054 }' 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.054 04:32:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 04:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.624 04:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.624 04:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 [2024-12-13 04:32:43.349286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.624 [2024-12-13 04:32:43.349395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.624 [2024-12-13 04:32:43.349458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:43.624 [2024-12-13 04:32:43.349508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.624 [2024-12-13 04:32:43.349754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.624 [2024-12-13 04:32:43.349801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.624 [2024-12-13 04:32:43.349882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:43.624 [2024-12-13 04:32:43.349917] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:16:43.624 [2024-12-13 04:32:43.349960] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:43.624 [2024-12-13 04:32:43.350039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.624 [2024-12-13 04:32:43.353006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:43.624 [2024-12-13 04:32:43.355213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.624 spare 00:16:43.624 04:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.624 04:32:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.564 "name": "raid_bdev1", 00:16:44.564 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:44.564 "strip_size_kb": 0, 00:16:44.564 "state": "online", 00:16:44.564 "raid_level": "raid1", 00:16:44.564 "superblock": true, 00:16:44.564 "num_base_bdevs": 2, 00:16:44.564 "num_base_bdevs_discovered": 2, 00:16:44.564 "num_base_bdevs_operational": 2, 00:16:44.564 "process": { 00:16:44.564 "type": "rebuild", 00:16:44.564 "target": "spare", 00:16:44.564 "progress": { 00:16:44.564 "blocks": 2560, 00:16:44.564 "percent": 32 00:16:44.564 } 00:16:44.564 }, 00:16:44.564 "base_bdevs_list": [ 00:16:44.564 { 00:16:44.564 "name": "spare", 00:16:44.564 "uuid": "d1019234-bab4-5544-8a93-82645e8805b9", 00:16:44.564 "is_configured": true, 00:16:44.564 "data_offset": 256, 00:16:44.564 "data_size": 7936 00:16:44.564 }, 00:16:44.564 { 00:16:44.564 "name": "BaseBdev2", 00:16:44.564 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:44.564 "is_configured": true, 00:16:44.564 "data_offset": 256, 00:16:44.564 "data_size": 7936 00:16:44.564 } 00:16:44.564 ] 00:16:44.564 }' 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.564 04:32:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.564 [2024-12-13 04:32:44.518988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.564 [2024-12-13 04:32:44.562827] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.564 [2024-12-13 04:32:44.562940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.564 [2024-12-13 04:32:44.562976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.564 [2024-12-13 04:32:44.563016] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.564 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.564 04:32:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.824 "name": "raid_bdev1", 00:16:44.824 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:44.824 "strip_size_kb": 0, 00:16:44.824 "state": "online", 00:16:44.824 "raid_level": "raid1", 00:16:44.824 "superblock": true, 00:16:44.824 "num_base_bdevs": 2, 00:16:44.824 "num_base_bdevs_discovered": 1, 00:16:44.824 "num_base_bdevs_operational": 1, 00:16:44.824 "base_bdevs_list": [ 00:16:44.824 { 00:16:44.824 "name": null, 00:16:44.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.824 "is_configured": false, 00:16:44.824 "data_offset": 0, 00:16:44.824 "data_size": 7936 00:16:44.824 }, 00:16:44.824 { 00:16:44.824 "name": "BaseBdev2", 00:16:44.824 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:44.824 "is_configured": true, 00:16:44.824 "data_offset": 256, 00:16:44.824 "data_size": 7936 00:16:44.824 } 00:16:44.824 ] 00:16:44.824 }' 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.824 04:32:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.084 "name": "raid_bdev1", 00:16:45.084 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:45.084 "strip_size_kb": 0, 00:16:45.084 "state": "online", 00:16:45.084 "raid_level": "raid1", 00:16:45.084 "superblock": true, 00:16:45.084 "num_base_bdevs": 2, 00:16:45.084 "num_base_bdevs_discovered": 1, 00:16:45.084 "num_base_bdevs_operational": 1, 00:16:45.084 "base_bdevs_list": [ 00:16:45.084 { 00:16:45.084 "name": null, 00:16:45.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.084 "is_configured": false, 00:16:45.084 "data_offset": 0, 00:16:45.084 "data_size": 7936 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "name": "BaseBdev2", 00:16:45.084 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:45.084 "is_configured": true, 
00:16:45.084 "data_offset": 256, 00:16:45.084 "data_size": 7936 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }' 00:16:45.084 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 [2024-12-13 04:32:45.171022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.344 [2024-12-13 04:32:45.171076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.344 [2024-12-13 04:32:45.171095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:45.344 [2024-12-13 04:32:45.171106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.344 [2024-12-13 04:32:45.171342] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.344 [2024-12-13 04:32:45.171359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.344 [2024-12-13 04:32:45.171412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:45.344 [2024-12-13 04:32:45.171430] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.344 [2024-12-13 04:32:45.171451] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:45.344 [2024-12-13 04:32:45.171466] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:45.344 BaseBdev1 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.344 04:32:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:46.284 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.284 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.284 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.284 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.284 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.284 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.285 "name": "raid_bdev1", 00:16:46.285 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:46.285 "strip_size_kb": 0, 00:16:46.285 "state": "online", 00:16:46.285 "raid_level": "raid1", 00:16:46.285 "superblock": true, 00:16:46.285 "num_base_bdevs": 2, 00:16:46.285 "num_base_bdevs_discovered": 1, 00:16:46.285 "num_base_bdevs_operational": 1, 00:16:46.285 "base_bdevs_list": [ 00:16:46.285 { 00:16:46.285 "name": null, 00:16:46.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.285 "is_configured": false, 00:16:46.285 "data_offset": 0, 00:16:46.285 "data_size": 7936 00:16:46.285 }, 00:16:46.285 { 00:16:46.285 "name": "BaseBdev2", 00:16:46.285 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:46.285 "is_configured": true, 00:16:46.285 "data_offset": 256, 00:16:46.285 "data_size": 7936 00:16:46.285 } 00:16:46.285 ] 00:16:46.285 }' 00:16:46.285 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.285 04:32:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.855 "name": "raid_bdev1", 00:16:46.855 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:46.855 "strip_size_kb": 0, 00:16:46.855 "state": "online", 00:16:46.855 "raid_level": "raid1", 00:16:46.855 "superblock": true, 00:16:46.855 "num_base_bdevs": 2, 00:16:46.855 "num_base_bdevs_discovered": 1, 00:16:46.855 "num_base_bdevs_operational": 1, 00:16:46.855 "base_bdevs_list": [ 00:16:46.855 { 00:16:46.855 "name": null, 00:16:46.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.855 "is_configured": false, 00:16:46.855 "data_offset": 0, 00:16:46.855 
"data_size": 7936 00:16:46.855 }, 00:16:46.855 { 00:16:46.855 "name": "BaseBdev2", 00:16:46.855 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:46.855 "is_configured": true, 00:16:46.855 "data_offset": 256, 00:16:46.855 "data_size": 7936 00:16:46.855 } 00:16:46.855 ] 00:16:46.855 }' 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 [2024-12-13 04:32:46.788223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.855 [2024-12-13 04:32:46.788443] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.855 [2024-12-13 04:32:46.788527] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:46.855 request: 00:16:46.855 { 00:16:46.855 "base_bdev": "BaseBdev1", 00:16:46.855 "raid_bdev": "raid_bdev1", 00:16:46.855 "method": "bdev_raid_add_base_bdev", 00:16:46.855 "req_id": 1 00:16:46.855 } 00:16:46.855 Got JSON-RPC error response 00:16:46.855 response: 00:16:46.855 { 00:16:46.855 "code": -22, 00:16:46.855 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:46.855 } 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.855 04:32:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.795 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.054 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.055 "name": "raid_bdev1", 00:16:48.055 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:48.055 "strip_size_kb": 0, 00:16:48.055 "state": "online", 00:16:48.055 "raid_level": "raid1", 00:16:48.055 "superblock": true, 00:16:48.055 "num_base_bdevs": 2, 00:16:48.055 "num_base_bdevs_discovered": 1, 00:16:48.055 "num_base_bdevs_operational": 1, 00:16:48.055 "base_bdevs_list": [ 
00:16:48.055 { 00:16:48.055 "name": null, 00:16:48.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.055 "is_configured": false, 00:16:48.055 "data_offset": 0, 00:16:48.055 "data_size": 7936 00:16:48.055 }, 00:16:48.055 { 00:16:48.055 "name": "BaseBdev2", 00:16:48.055 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:48.055 "is_configured": true, 00:16:48.055 "data_offset": 256, 00:16:48.055 "data_size": 7936 00:16:48.055 } 00:16:48.055 ] 00:16:48.055 }' 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.055 04:32:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.315 "name": "raid_bdev1", 00:16:48.315 "uuid": "bd82aef4-bf02-405a-8227-5299bc0a5a72", 00:16:48.315 "strip_size_kb": 0, 00:16:48.315 "state": "online", 00:16:48.315 "raid_level": "raid1", 00:16:48.315 "superblock": true, 00:16:48.315 "num_base_bdevs": 2, 00:16:48.315 "num_base_bdevs_discovered": 1, 00:16:48.315 "num_base_bdevs_operational": 1, 00:16:48.315 "base_bdevs_list": [ 00:16:48.315 { 00:16:48.315 "name": null, 00:16:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.315 "is_configured": false, 00:16:48.315 "data_offset": 0, 00:16:48.315 "data_size": 7936 00:16:48.315 }, 00:16:48.315 { 00:16:48.315 "name": "BaseBdev2", 00:16:48.315 "uuid": "e2ddee3d-b9dd-52ed-bcfa-514067999703", 00:16:48.315 "is_configured": true, 00:16:48.315 "data_offset": 256, 00:16:48.315 "data_size": 7936 00:16:48.315 } 00:16:48.315 ] 00:16:48.315 }' 00:16:48.315 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 99882 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99882 ']' 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99882 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.575 
04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99882 00:16:48.575 killing process with pid 99882 00:16:48.575 Received shutdown signal, test time was about 60.000000 seconds 00:16:48.575 00:16:48.575 Latency(us) 00:16:48.575 [2024-12-13T04:32:48.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.575 [2024-12-13T04:32:48.590Z] =================================================================================================================== 00:16:48.575 [2024-12-13T04:32:48.590Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99882' 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99882 00:16:48.575 [2024-12-13 04:32:48.443237] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.575 [2024-12-13 04:32:48.443349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.575 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99882 00:16:48.575 [2024-12-13 04:32:48.443403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.575 [2024-12-13 04:32:48.443413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:48.575 [2024-12-13 04:32:48.504985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.835 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:16:48.835 00:16:48.835 real 0m18.630s 00:16:48.835 user 0m24.610s 00:16:48.835 sys 0m2.782s 00:16:48.835 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.835 ************************************ 00:16:48.835 END TEST raid_rebuild_test_sb_md_separate 00:16:48.835 ************************************ 00:16:48.835 04:32:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.095 04:32:48 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:49.095 04:32:48 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:49.095 04:32:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:49.095 04:32:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.095 04:32:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.095 ************************************ 00:16:49.095 START TEST raid_state_function_test_sb_md_interleaved 00:16:49.095 ************************************ 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:49.095 04:32:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100559 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100559' 00:16:49.095 Process raid pid: 100559 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100559 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100559 ']' 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.095 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.096 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.096 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.096 04:32:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.096 [2024-12-13 04:32:48.996125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:16:49.096 [2024-12-13 04:32:48.996303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.355 [2024-12-13 04:32:49.153162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.355 [2024-12-13 04:32:49.191938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.355 [2024-12-13 04:32:49.269208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.355 [2024-12-13 04:32:49.269244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.925 [2024-12-13 04:32:49.820351] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.925 [2024-12-13 04:32:49.820408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.925 [2024-12-13 04:32:49.820418] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.925 [2024-12-13 04:32:49.820427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.925 04:32:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.925 04:32:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.925 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.925 "name": "Existed_Raid", 00:16:49.925 "uuid": "674398e2-4f16-4b94-84b5-3385dcd435fa", 00:16:49.925 "strip_size_kb": 0, 00:16:49.925 "state": "configuring", 00:16:49.925 "raid_level": "raid1", 00:16:49.925 "superblock": true, 00:16:49.925 "num_base_bdevs": 2, 00:16:49.925 "num_base_bdevs_discovered": 0, 00:16:49.925 "num_base_bdevs_operational": 2, 00:16:49.925 "base_bdevs_list": [ 00:16:49.925 { 00:16:49.925 "name": "BaseBdev1", 00:16:49.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.925 "is_configured": false, 00:16:49.926 "data_offset": 0, 00:16:49.926 "data_size": 0 00:16:49.926 }, 00:16:49.926 { 00:16:49.926 "name": "BaseBdev2", 00:16:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.926 "is_configured": false, 00:16:49.926 "data_offset": 0, 00:16:49.926 "data_size": 0 00:16:49.926 } 00:16:49.926 ] 00:16:49.926 }' 00:16:49.926 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.926 04:32:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.495 [2024-12-13 04:32:50.223549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.495 [2024-12-13 04:32:50.223654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.495 [2024-12-13 04:32:50.235548] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.495 [2024-12-13 04:32:50.235637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.495 [2024-12-13 04:32:50.235664] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.495 [2024-12-13 04:32:50.235699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.495 [2024-12-13 04:32:50.262747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.495 BaseBdev1 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:50.495 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.496 [ 00:16:50.496 { 00:16:50.496 "name": "BaseBdev1", 00:16:50.496 "aliases": [ 00:16:50.496 "7c0e8e1c-475e-4460-8c54-5acdb172a7f5" 00:16:50.496 ], 00:16:50.496 "product_name": "Malloc disk", 00:16:50.496 "block_size": 4128, 00:16:50.496 "num_blocks": 8192, 00:16:50.496 "uuid": "7c0e8e1c-475e-4460-8c54-5acdb172a7f5", 00:16:50.496 "md_size": 32, 00:16:50.496 
"md_interleave": true, 00:16:50.496 "dif_type": 0, 00:16:50.496 "assigned_rate_limits": { 00:16:50.496 "rw_ios_per_sec": 0, 00:16:50.496 "rw_mbytes_per_sec": 0, 00:16:50.496 "r_mbytes_per_sec": 0, 00:16:50.496 "w_mbytes_per_sec": 0 00:16:50.496 }, 00:16:50.496 "claimed": true, 00:16:50.496 "claim_type": "exclusive_write", 00:16:50.496 "zoned": false, 00:16:50.496 "supported_io_types": { 00:16:50.496 "read": true, 00:16:50.496 "write": true, 00:16:50.496 "unmap": true, 00:16:50.496 "flush": true, 00:16:50.496 "reset": true, 00:16:50.496 "nvme_admin": false, 00:16:50.496 "nvme_io": false, 00:16:50.496 "nvme_io_md": false, 00:16:50.496 "write_zeroes": true, 00:16:50.496 "zcopy": true, 00:16:50.496 "get_zone_info": false, 00:16:50.496 "zone_management": false, 00:16:50.496 "zone_append": false, 00:16:50.496 "compare": false, 00:16:50.496 "compare_and_write": false, 00:16:50.496 "abort": true, 00:16:50.496 "seek_hole": false, 00:16:50.496 "seek_data": false, 00:16:50.496 "copy": true, 00:16:50.496 "nvme_iov_md": false 00:16:50.496 }, 00:16:50.496 "memory_domains": [ 00:16:50.496 { 00:16:50.496 "dma_device_id": "system", 00:16:50.496 "dma_device_type": 1 00:16:50.496 }, 00:16:50.496 { 00:16:50.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.496 "dma_device_type": 2 00:16:50.496 } 00:16:50.496 ], 00:16:50.496 "driver_specific": {} 00:16:50.496 } 00:16:50.496 ] 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.496 04:32:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.496 "name": "Existed_Raid", 00:16:50.496 "uuid": "0959ae7d-9de6-47f8-9906-d47d76b0431d", 00:16:50.496 "strip_size_kb": 0, 00:16:50.496 "state": "configuring", 00:16:50.496 "raid_level": "raid1", 
00:16:50.496 "superblock": true, 00:16:50.496 "num_base_bdevs": 2, 00:16:50.496 "num_base_bdevs_discovered": 1, 00:16:50.496 "num_base_bdevs_operational": 2, 00:16:50.496 "base_bdevs_list": [ 00:16:50.496 { 00:16:50.496 "name": "BaseBdev1", 00:16:50.496 "uuid": "7c0e8e1c-475e-4460-8c54-5acdb172a7f5", 00:16:50.496 "is_configured": true, 00:16:50.496 "data_offset": 256, 00:16:50.496 "data_size": 7936 00:16:50.496 }, 00:16:50.496 { 00:16:50.496 "name": "BaseBdev2", 00:16:50.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.496 "is_configured": false, 00:16:50.496 "data_offset": 0, 00:16:50.496 "data_size": 0 00:16:50.496 } 00:16:50.496 ] 00:16:50.496 }' 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.496 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 [2024-12-13 04:32:50.706067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.756 [2024-12-13 04:32:50.706110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 [2024-12-13 04:32:50.718097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.756 [2024-12-13 04:32:50.720177] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.756 [2024-12-13 04:32:50.720218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.756 
04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.016 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.016 "name": "Existed_Raid", 00:16:51.016 "uuid": "033d0483-6789-4f43-b321-1db5879f4af0", 00:16:51.016 "strip_size_kb": 0, 00:16:51.016 "state": "configuring", 00:16:51.016 "raid_level": "raid1", 00:16:51.016 "superblock": true, 00:16:51.016 "num_base_bdevs": 2, 00:16:51.016 "num_base_bdevs_discovered": 1, 00:16:51.016 "num_base_bdevs_operational": 2, 00:16:51.016 "base_bdevs_list": [ 00:16:51.016 { 00:16:51.016 "name": "BaseBdev1", 00:16:51.016 "uuid": "7c0e8e1c-475e-4460-8c54-5acdb172a7f5", 00:16:51.016 "is_configured": true, 00:16:51.016 "data_offset": 256, 00:16:51.016 "data_size": 7936 00:16:51.016 }, 00:16:51.016 { 00:16:51.016 "name": "BaseBdev2", 00:16:51.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.016 "is_configured": false, 00:16:51.016 "data_offset": 0, 00:16:51.016 "data_size": 0 00:16:51.016 } 00:16:51.016 ] 00:16:51.016 }' 00:16:51.016 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:51.016 04:32:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.276 [2024-12-13 04:32:51.202183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.276 [2024-12-13 04:32:51.202454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:51.276 [2024-12-13 04:32:51.202513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:51.276 [2024-12-13 04:32:51.202659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:51.276 [2024-12-13 04:32:51.202779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:51.276 [2024-12-13 04:32:51.202829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:51.276 [2024-12-13 04:32:51.202928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.276 BaseBdev2 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.276 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.276 [ 00:16:51.276 { 00:16:51.276 "name": "BaseBdev2", 00:16:51.276 "aliases": [ 00:16:51.276 "92ec5cf9-884f-4e39-b8be-2eb1cb68b623" 00:16:51.276 ], 00:16:51.276 "product_name": "Malloc disk", 00:16:51.276 "block_size": 4128, 00:16:51.276 "num_blocks": 8192, 00:16:51.277 "uuid": "92ec5cf9-884f-4e39-b8be-2eb1cb68b623", 00:16:51.277 "md_size": 32, 00:16:51.277 "md_interleave": true, 00:16:51.277 "dif_type": 0, 00:16:51.277 "assigned_rate_limits": { 00:16:51.277 "rw_ios_per_sec": 0, 00:16:51.277 "rw_mbytes_per_sec": 0, 00:16:51.277 "r_mbytes_per_sec": 0, 00:16:51.277 "w_mbytes_per_sec": 0 00:16:51.277 }, 00:16:51.277 "claimed": true, 00:16:51.277 "claim_type": "exclusive_write", 
00:16:51.277 "zoned": false, 00:16:51.277 "supported_io_types": { 00:16:51.277 "read": true, 00:16:51.277 "write": true, 00:16:51.277 "unmap": true, 00:16:51.277 "flush": true, 00:16:51.277 "reset": true, 00:16:51.277 "nvme_admin": false, 00:16:51.277 "nvme_io": false, 00:16:51.277 "nvme_io_md": false, 00:16:51.277 "write_zeroes": true, 00:16:51.277 "zcopy": true, 00:16:51.277 "get_zone_info": false, 00:16:51.277 "zone_management": false, 00:16:51.277 "zone_append": false, 00:16:51.277 "compare": false, 00:16:51.277 "compare_and_write": false, 00:16:51.277 "abort": true, 00:16:51.277 "seek_hole": false, 00:16:51.277 "seek_data": false, 00:16:51.277 "copy": true, 00:16:51.277 "nvme_iov_md": false 00:16:51.277 }, 00:16:51.277 "memory_domains": [ 00:16:51.277 { 00:16:51.277 "dma_device_id": "system", 00:16:51.277 "dma_device_type": 1 00:16:51.277 }, 00:16:51.277 { 00:16:51.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.277 "dma_device_type": 2 00:16:51.277 } 00:16:51.277 ], 00:16:51.277 "driver_specific": {} 00:16:51.277 } 00:16:51.277 ] 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.277 
04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.277 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.537 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.537 "name": "Existed_Raid", 00:16:51.537 "uuid": "033d0483-6789-4f43-b321-1db5879f4af0", 00:16:51.537 "strip_size_kb": 0, 00:16:51.537 "state": "online", 00:16:51.537 "raid_level": "raid1", 00:16:51.537 "superblock": true, 00:16:51.537 "num_base_bdevs": 2, 00:16:51.537 "num_base_bdevs_discovered": 2, 00:16:51.537 
"num_base_bdevs_operational": 2, 00:16:51.537 "base_bdevs_list": [ 00:16:51.537 { 00:16:51.537 "name": "BaseBdev1", 00:16:51.537 "uuid": "7c0e8e1c-475e-4460-8c54-5acdb172a7f5", 00:16:51.537 "is_configured": true, 00:16:51.537 "data_offset": 256, 00:16:51.537 "data_size": 7936 00:16:51.537 }, 00:16:51.537 { 00:16:51.537 "name": "BaseBdev2", 00:16:51.537 "uuid": "92ec5cf9-884f-4e39-b8be-2eb1cb68b623", 00:16:51.537 "is_configured": true, 00:16:51.537 "data_offset": 256, 00:16:51.537 "data_size": 7936 00:16:51.537 } 00:16:51.537 ] 00:16:51.537 }' 00:16:51.537 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.537 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.797 04:32:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.797 [2024-12-13 04:32:51.673685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.797 "name": "Existed_Raid", 00:16:51.797 "aliases": [ 00:16:51.797 "033d0483-6789-4f43-b321-1db5879f4af0" 00:16:51.797 ], 00:16:51.797 "product_name": "Raid Volume", 00:16:51.797 "block_size": 4128, 00:16:51.797 "num_blocks": 7936, 00:16:51.797 "uuid": "033d0483-6789-4f43-b321-1db5879f4af0", 00:16:51.797 "md_size": 32, 00:16:51.797 "md_interleave": true, 00:16:51.797 "dif_type": 0, 00:16:51.797 "assigned_rate_limits": { 00:16:51.797 "rw_ios_per_sec": 0, 00:16:51.797 "rw_mbytes_per_sec": 0, 00:16:51.797 "r_mbytes_per_sec": 0, 00:16:51.797 "w_mbytes_per_sec": 0 00:16:51.797 }, 00:16:51.797 "claimed": false, 00:16:51.797 "zoned": false, 00:16:51.797 "supported_io_types": { 00:16:51.797 "read": true, 00:16:51.797 "write": true, 00:16:51.797 "unmap": false, 00:16:51.797 "flush": false, 00:16:51.797 "reset": true, 00:16:51.797 "nvme_admin": false, 00:16:51.797 "nvme_io": false, 00:16:51.797 "nvme_io_md": false, 00:16:51.797 "write_zeroes": true, 00:16:51.797 "zcopy": false, 00:16:51.797 "get_zone_info": false, 00:16:51.797 "zone_management": false, 00:16:51.797 "zone_append": false, 00:16:51.797 "compare": false, 00:16:51.797 "compare_and_write": false, 00:16:51.797 "abort": false, 00:16:51.797 "seek_hole": false, 00:16:51.797 "seek_data": false, 00:16:51.797 "copy": false, 00:16:51.797 "nvme_iov_md": false 00:16:51.797 }, 00:16:51.797 "memory_domains": [ 00:16:51.797 { 00:16:51.797 "dma_device_id": "system", 00:16:51.797 "dma_device_type": 1 00:16:51.797 }, 00:16:51.797 { 00:16:51.797 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:51.797 "dma_device_type": 2 00:16:51.797 }, 00:16:51.797 { 00:16:51.797 "dma_device_id": "system", 00:16:51.797 "dma_device_type": 1 00:16:51.797 }, 00:16:51.797 { 00:16:51.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.797 "dma_device_type": 2 00:16:51.797 } 00:16:51.797 ], 00:16:51.797 "driver_specific": { 00:16:51.797 "raid": { 00:16:51.797 "uuid": "033d0483-6789-4f43-b321-1db5879f4af0", 00:16:51.797 "strip_size_kb": 0, 00:16:51.797 "state": "online", 00:16:51.797 "raid_level": "raid1", 00:16:51.797 "superblock": true, 00:16:51.797 "num_base_bdevs": 2, 00:16:51.797 "num_base_bdevs_discovered": 2, 00:16:51.797 "num_base_bdevs_operational": 2, 00:16:51.797 "base_bdevs_list": [ 00:16:51.797 { 00:16:51.797 "name": "BaseBdev1", 00:16:51.797 "uuid": "7c0e8e1c-475e-4460-8c54-5acdb172a7f5", 00:16:51.797 "is_configured": true, 00:16:51.797 "data_offset": 256, 00:16:51.797 "data_size": 7936 00:16:51.797 }, 00:16:51.797 { 00:16:51.797 "name": "BaseBdev2", 00:16:51.797 "uuid": "92ec5cf9-884f-4e39-b8be-2eb1cb68b623", 00:16:51.797 "is_configured": true, 00:16:51.797 "data_offset": 256, 00:16:51.797 "data_size": 7936 00:16:51.797 } 00:16:51.797 ] 00:16:51.797 } 00:16:51.797 } 00:16:51.797 }' 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.797 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:51.797 BaseBdev2' 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.798 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:52.058 
04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.058 [2024-12-13 04:32:51.905105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.058 04:32:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.058 "name": "Existed_Raid", 00:16:52.058 "uuid": "033d0483-6789-4f43-b321-1db5879f4af0", 00:16:52.058 "strip_size_kb": 0, 00:16:52.058 "state": "online", 00:16:52.058 "raid_level": "raid1", 00:16:52.058 "superblock": true, 00:16:52.058 "num_base_bdevs": 2, 00:16:52.058 "num_base_bdevs_discovered": 1, 00:16:52.058 "num_base_bdevs_operational": 1, 00:16:52.058 "base_bdevs_list": [ 00:16:52.058 { 00:16:52.058 "name": null, 00:16:52.058 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:52.058 "is_configured": false, 00:16:52.058 "data_offset": 0, 00:16:52.058 "data_size": 7936 00:16:52.058 }, 00:16:52.058 { 00:16:52.058 "name": "BaseBdev2", 00:16:52.058 "uuid": "92ec5cf9-884f-4e39-b8be-2eb1cb68b623", 00:16:52.058 "is_configured": true, 00:16:52.058 "data_offset": 256, 00:16:52.058 "data_size": 7936 00:16:52.058 } 00:16:52.058 ] 00:16:52.058 }' 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.058 04:32:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:52.666 04:32:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.666 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.666 [2024-12-13 04:32:52.453038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.666 [2024-12-13 04:32:52.453188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.666 [2024-12-13 04:32:52.475054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.667 [2024-12-13 04:32:52.475105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.667 [2024-12-13 04:32:52.475117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100559 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100559 ']' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100559 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100559 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.667 killing process with pid 100559 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100559' 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100559 00:16:52.667 [2024-12-13 04:32:52.569914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.667 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100559 00:16:52.667 [2024-12-13 04:32:52.571505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:16:52.943 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:52.943 00:16:52.943 real 0m3.998s 00:16:52.943 user 0m6.138s 00:16:52.943 sys 0m0.896s 00:16:52.943 ************************************ 00:16:52.943 END TEST raid_state_function_test_sb_md_interleaved 00:16:52.944 ************************************ 00:16:52.944 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.944 04:32:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.204 04:32:52 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:53.204 04:32:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:53.204 04:32:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.204 04:32:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.204 ************************************ 00:16:53.204 START TEST raid_superblock_test_md_interleaved 00:16:53.204 ************************************ 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # 
local base_bdevs_pt 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100800 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100800 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100800 ']' 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.204 04:32:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.204 [2024-12-13 04:32:53.076252] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:53.204 [2024-12-13 04:32:53.076388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100800 ] 00:16:53.464 [2024-12-13 04:32:53.234508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.464 [2024-12-13 04:32:53.275385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.464 [2024-12-13 04:32:53.352578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.464 [2024-12-13 04:32:53.352700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.040 malloc1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.040 [2024-12-13 04:32:53.910906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.040 [2024-12-13 04:32:53.911041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.040 [2024-12-13 04:32:53.911089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:16:54.040 [2024-12-13 04:32:53.911123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.040 [2024-12-13 04:32:53.913423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.040 [2024-12-13 04:32:53.913514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.040 pt1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.040 malloc2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.040 [2024-12-13 04:32:53.950122] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.040 [2024-12-13 04:32:53.950222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.040 [2024-12-13 04:32:53.950258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:54.040 [2024-12-13 04:32:53.950287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.040 [2024-12-13 04:32:53.952445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.040 [2024-12-13 04:32:53.952554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.040 pt2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.040 [2024-12-13 
04:32:53.962127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.040 [2024-12-13 04:32:53.964259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.040 [2024-12-13 04:32:53.964427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:54.040 [2024-12-13 04:32:53.964458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:54.040 [2024-12-13 04:32:53.964546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:54.040 [2024-12-13 04:32:53.964627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:54.040 [2024-12-13 04:32:53.964637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:54.040 [2024-12-13 04:32:53.964712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.040 04:32:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.040 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.040 "name": "raid_bdev1", 00:16:54.040 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:54.040 "strip_size_kb": 0, 00:16:54.040 "state": "online", 00:16:54.040 "raid_level": "raid1", 00:16:54.040 "superblock": true, 00:16:54.040 "num_base_bdevs": 2, 00:16:54.040 "num_base_bdevs_discovered": 2, 00:16:54.040 "num_base_bdevs_operational": 2, 00:16:54.040 "base_bdevs_list": [ 00:16:54.040 { 00:16:54.040 "name": "pt1", 00:16:54.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.040 "is_configured": true, 00:16:54.040 "data_offset": 256, 00:16:54.040 "data_size": 7936 00:16:54.040 }, 00:16:54.040 { 00:16:54.040 "name": "pt2", 00:16:54.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.041 "is_configured": true, 00:16:54.041 "data_offset": 256, 00:16:54.041 "data_size": 7936 00:16:54.041 } 00:16:54.041 ] 00:16:54.041 }' 00:16:54.041 04:32:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.041 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 [2024-12-13 04:32:54.418758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.609 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.609 "name": "raid_bdev1", 00:16:54.609 "aliases": [ 00:16:54.609 "10329a8a-4965-495b-8628-3b3c001a03c0" 00:16:54.609 ], 00:16:54.609 "product_name": "Raid Volume", 00:16:54.609 "block_size": 4128, 00:16:54.609 
"num_blocks": 7936, 00:16:54.609 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:54.609 "md_size": 32, 00:16:54.609 "md_interleave": true, 00:16:54.609 "dif_type": 0, 00:16:54.609 "assigned_rate_limits": { 00:16:54.609 "rw_ios_per_sec": 0, 00:16:54.609 "rw_mbytes_per_sec": 0, 00:16:54.609 "r_mbytes_per_sec": 0, 00:16:54.609 "w_mbytes_per_sec": 0 00:16:54.609 }, 00:16:54.609 "claimed": false, 00:16:54.609 "zoned": false, 00:16:54.609 "supported_io_types": { 00:16:54.609 "read": true, 00:16:54.609 "write": true, 00:16:54.609 "unmap": false, 00:16:54.609 "flush": false, 00:16:54.609 "reset": true, 00:16:54.609 "nvme_admin": false, 00:16:54.609 "nvme_io": false, 00:16:54.609 "nvme_io_md": false, 00:16:54.609 "write_zeroes": true, 00:16:54.609 "zcopy": false, 00:16:54.609 "get_zone_info": false, 00:16:54.609 "zone_management": false, 00:16:54.609 "zone_append": false, 00:16:54.609 "compare": false, 00:16:54.609 "compare_and_write": false, 00:16:54.609 "abort": false, 00:16:54.609 "seek_hole": false, 00:16:54.609 "seek_data": false, 00:16:54.609 "copy": false, 00:16:54.609 "nvme_iov_md": false 00:16:54.609 }, 00:16:54.609 "memory_domains": [ 00:16:54.609 { 00:16:54.609 "dma_device_id": "system", 00:16:54.609 "dma_device_type": 1 00:16:54.609 }, 00:16:54.609 { 00:16:54.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.609 "dma_device_type": 2 00:16:54.609 }, 00:16:54.609 { 00:16:54.609 "dma_device_id": "system", 00:16:54.609 "dma_device_type": 1 00:16:54.609 }, 00:16:54.609 { 00:16:54.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.609 "dma_device_type": 2 00:16:54.609 } 00:16:54.609 ], 00:16:54.609 "driver_specific": { 00:16:54.609 "raid": { 00:16:54.609 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:54.609 "strip_size_kb": 0, 00:16:54.609 "state": "online", 00:16:54.609 "raid_level": "raid1", 00:16:54.609 "superblock": true, 00:16:54.609 "num_base_bdevs": 2, 00:16:54.609 "num_base_bdevs_discovered": 2, 00:16:54.610 "num_base_bdevs_operational": 
2, 00:16:54.610 "base_bdevs_list": [ 00:16:54.610 { 00:16:54.610 "name": "pt1", 00:16:54.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.610 "is_configured": true, 00:16:54.610 "data_offset": 256, 00:16:54.610 "data_size": 7936 00:16:54.610 }, 00:16:54.610 { 00:16:54.610 "name": "pt2", 00:16:54.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.610 "is_configured": true, 00:16:54.610 "data_offset": 256, 00:16:54.610 "data_size": 7936 00:16:54.610 } 00:16:54.610 ] 00:16:54.610 } 00:16:54.610 } 00:16:54.610 }' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:54.610 pt2' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.610 04:32:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.610 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.870 [2024-12-13 04:32:54.658161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=10329a8a-4965-495b-8628-3b3c001a03c0 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 10329a8a-4965-495b-8628-3b3c001a03c0 ']' 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.870 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.870 [2024-12-13 04:32:54.705876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.870 [2024-12-13 04:32:54.705936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.871 [2024-12-13 04:32:54.706035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.871 [2024-12-13 04:32:54.706124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.871 [2024-12-13 04:32:54.706177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 04:32:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:54.871 04:32:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 [2024-12-13 04:32:54.841639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:54.871 [2024-12-13 04:32:54.843511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:16:54.871 [2024-12-13 04:32:54.843620] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:54.871 [2024-12-13 04:32:54.843699] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:54.871 [2024-12-13 04:32:54.843739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.871 [2024-12-13 04:32:54.843760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:54.871 request: 00:16:54.871 { 00:16:54.871 "name": "raid_bdev1", 00:16:54.871 "raid_level": "raid1", 00:16:54.871 "base_bdevs": [ 00:16:54.871 "malloc1", 00:16:54.871 "malloc2" 00:16:54.871 ], 00:16:54.871 "superblock": false, 00:16:54.871 "method": "bdev_raid_create", 00:16:54.871 "req_id": 1 00:16:54.871 } 00:16:54.871 Got JSON-RPC error response 00:16:54.871 response: 00:16:54.871 { 00:16:54.871 "code": -17, 00:16:54.871 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:54.871 } 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:54.871 04:32:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.871 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.131 [2024-12-13 04:32:54.909501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.131 [2024-12-13 04:32:54.909587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.131 [2024-12-13 04:32:54.909621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:55.131 [2024-12-13 04:32:54.909649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.131 [2024-12-13 04:32:54.911479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.131 [2024-12-13 04:32:54.911548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.131 [2024-12-13 04:32:54.911641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:55.131 [2024-12-13 04:32:54.911690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.131 pt1 00:16:55.131 04:32:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.131 
04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.131 "name": "raid_bdev1", 00:16:55.131 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:55.131 "strip_size_kb": 0, 00:16:55.131 "state": "configuring", 00:16:55.131 "raid_level": "raid1", 00:16:55.131 "superblock": true, 00:16:55.131 "num_base_bdevs": 2, 00:16:55.131 "num_base_bdevs_discovered": 1, 00:16:55.131 "num_base_bdevs_operational": 2, 00:16:55.131 "base_bdevs_list": [ 00:16:55.131 { 00:16:55.131 "name": "pt1", 00:16:55.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.131 "is_configured": true, 00:16:55.131 "data_offset": 256, 00:16:55.131 "data_size": 7936 00:16:55.131 }, 00:16:55.131 { 00:16:55.131 "name": null, 00:16:55.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.131 "is_configured": false, 00:16:55.131 "data_offset": 256, 00:16:55.131 "data_size": 7936 00:16:55.131 } 00:16:55.131 ] 00:16:55.131 }' 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.131 04:32:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.391 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:55.391 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:55.391 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.391 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.391 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.391 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 [2024-12-13 04:32:55.396646] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.392 [2024-12-13 04:32:55.396729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.392 [2024-12-13 04:32:55.396767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:55.392 [2024-12-13 04:32:55.396795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.392 [2024-12-13 04:32:55.396972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.392 [2024-12-13 04:32:55.397025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.392 [2024-12-13 04:32:55.397091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:55.392 [2024-12-13 04:32:55.397134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.392 [2024-12-13 04:32:55.397229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:55.392 [2024-12-13 04:32:55.397264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:55.392 [2024-12-13 04:32:55.397353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:55.392 [2024-12-13 04:32:55.397455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:55.392 [2024-12-13 04:32:55.397495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:55.392 [2024-12-13 04:32:55.397588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.392 pt2 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:55.392 04:32:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.392 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.651 04:32:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.651 "name": "raid_bdev1", 00:16:55.651 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:55.651 "strip_size_kb": 0, 00:16:55.651 "state": "online", 00:16:55.651 "raid_level": "raid1", 00:16:55.651 "superblock": true, 00:16:55.651 "num_base_bdevs": 2, 00:16:55.651 "num_base_bdevs_discovered": 2, 00:16:55.651 "num_base_bdevs_operational": 2, 00:16:55.651 "base_bdevs_list": [ 00:16:55.651 { 00:16:55.651 "name": "pt1", 00:16:55.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.651 "is_configured": true, 00:16:55.651 "data_offset": 256, 00:16:55.651 "data_size": 7936 00:16:55.651 }, 00:16:55.651 { 00:16:55.651 "name": "pt2", 00:16:55.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.651 "is_configured": true, 00:16:55.651 "data_offset": 256, 00:16:55.651 "data_size": 7936 00:16:55.651 } 00:16:55.651 ] 00:16:55.651 }' 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.651 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.910 [2024-12-13 04:32:55.892131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.910 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.170 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.170 "name": "raid_bdev1", 00:16:56.170 "aliases": [ 00:16:56.170 "10329a8a-4965-495b-8628-3b3c001a03c0" 00:16:56.170 ], 00:16:56.170 "product_name": "Raid Volume", 00:16:56.170 "block_size": 4128, 00:16:56.170 "num_blocks": 7936, 00:16:56.170 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:56.170 "md_size": 32, 00:16:56.170 "md_interleave": true, 00:16:56.170 "dif_type": 0, 00:16:56.170 "assigned_rate_limits": { 00:16:56.170 "rw_ios_per_sec": 0, 00:16:56.170 "rw_mbytes_per_sec": 0, 00:16:56.170 "r_mbytes_per_sec": 0, 00:16:56.170 "w_mbytes_per_sec": 0 00:16:56.170 }, 00:16:56.170 "claimed": false, 00:16:56.170 "zoned": false, 00:16:56.170 "supported_io_types": { 00:16:56.170 "read": true, 00:16:56.170 "write": true, 00:16:56.170 "unmap": false, 00:16:56.170 "flush": false, 00:16:56.170 "reset": true, 00:16:56.170 "nvme_admin": false, 00:16:56.170 "nvme_io": false, 00:16:56.170 "nvme_io_md": false, 00:16:56.170 "write_zeroes": true, 00:16:56.170 "zcopy": false, 00:16:56.170 "get_zone_info": false, 00:16:56.170 "zone_management": false, 00:16:56.170 "zone_append": false, 00:16:56.170 "compare": false, 00:16:56.170 "compare_and_write": false, 00:16:56.170 "abort": false, 00:16:56.170 "seek_hole": false, 
00:16:56.170 "seek_data": false, 00:16:56.170 "copy": false, 00:16:56.170 "nvme_iov_md": false 00:16:56.170 }, 00:16:56.170 "memory_domains": [ 00:16:56.170 { 00:16:56.170 "dma_device_id": "system", 00:16:56.170 "dma_device_type": 1 00:16:56.170 }, 00:16:56.170 { 00:16:56.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.170 "dma_device_type": 2 00:16:56.170 }, 00:16:56.170 { 00:16:56.170 "dma_device_id": "system", 00:16:56.170 "dma_device_type": 1 00:16:56.170 }, 00:16:56.170 { 00:16:56.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.170 "dma_device_type": 2 00:16:56.170 } 00:16:56.170 ], 00:16:56.170 "driver_specific": { 00:16:56.170 "raid": { 00:16:56.170 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:56.170 "strip_size_kb": 0, 00:16:56.170 "state": "online", 00:16:56.170 "raid_level": "raid1", 00:16:56.170 "superblock": true, 00:16:56.170 "num_base_bdevs": 2, 00:16:56.170 "num_base_bdevs_discovered": 2, 00:16:56.170 "num_base_bdevs_operational": 2, 00:16:56.170 "base_bdevs_list": [ 00:16:56.170 { 00:16:56.170 "name": "pt1", 00:16:56.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.170 "is_configured": true, 00:16:56.170 "data_offset": 256, 00:16:56.170 "data_size": 7936 00:16:56.171 }, 00:16:56.171 { 00:16:56.171 "name": "pt2", 00:16:56.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.171 "is_configured": true, 00:16:56.171 "data_offset": 256, 00:16:56.171 "data_size": 7936 00:16:56.171 } 00:16:56.171 ] 00:16:56.171 } 00:16:56.171 } 00:16:56.171 }' 00:16:56.171 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.171 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:56.171 pt2' 00:16:56.171 04:32:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.171 
04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.171 [2024-12-13 04:32:56.127704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 10329a8a-4965-495b-8628-3b3c001a03c0 '!=' 10329a8a-4965-495b-8628-3b3c001a03c0 ']' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.171 [2024-12-13 04:32:56.171418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:56.171 04:32:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.171 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.431 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.431 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.431 04:32:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.431 "name": "raid_bdev1", 00:16:56.431 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:56.431 "strip_size_kb": 0, 00:16:56.431 "state": "online", 00:16:56.431 "raid_level": "raid1", 00:16:56.431 "superblock": true, 00:16:56.431 "num_base_bdevs": 2, 00:16:56.431 "num_base_bdevs_discovered": 1, 00:16:56.431 "num_base_bdevs_operational": 1, 00:16:56.431 "base_bdevs_list": [ 00:16:56.431 { 00:16:56.431 "name": null, 00:16:56.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.431 "is_configured": false, 00:16:56.431 "data_offset": 0, 00:16:56.431 "data_size": 7936 00:16:56.431 }, 00:16:56.431 { 00:16:56.431 "name": "pt2", 00:16:56.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.431 "is_configured": true, 00:16:56.431 "data_offset": 256, 00:16:56.431 "data_size": 7936 00:16:56.431 } 00:16:56.431 ] 00:16:56.431 }' 00:16:56.431 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.431 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.690 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:56.690 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.690 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.690 [2024-12-13 04:32:56.618611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.690 [2024-12-13 04:32:56.618670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.691 [2024-12-13 04:32:56.618747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.691 [2024-12-13 04:32:56.618805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.691 [2024-12-13 04:32:56.618836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:56.691 04:32:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.691 [2024-12-13 04:32:56.694490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.691 [2024-12-13 04:32:56.694569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.691 [2024-12-13 04:32:56.694602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:56.691 [2024-12-13 04:32:56.694628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.691 [2024-12-13 04:32:56.696502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.691 [2024-12-13 04:32:56.696566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.691 [2024-12-13 04:32:56.696634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:56.691 [2024-12-13 04:32:56.696684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.691 [2024-12-13 04:32:56.696765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:56.691 [2024-12-13 04:32:56.696822] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:56.691 [2024-12-13 04:32:56.696918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:56.691 [2024-12-13 04:32:56.697009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:56.691 [2024-12-13 04:32:56.697045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:56.691 [2024-12-13 04:32:56.697127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.691 pt2 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.691 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.951 "name": "raid_bdev1", 00:16:56.951 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:56.951 "strip_size_kb": 0, 00:16:56.951 "state": "online", 00:16:56.951 "raid_level": "raid1", 00:16:56.951 "superblock": true, 00:16:56.951 "num_base_bdevs": 2, 00:16:56.951 "num_base_bdevs_discovered": 1, 00:16:56.951 "num_base_bdevs_operational": 1, 00:16:56.951 "base_bdevs_list": [ 00:16:56.951 { 00:16:56.951 "name": null, 00:16:56.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.951 "is_configured": false, 00:16:56.951 "data_offset": 256, 00:16:56.951 "data_size": 7936 00:16:56.951 }, 00:16:56.951 { 00:16:56.951 "name": "pt2", 00:16:56.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.951 "is_configured": true, 00:16:56.951 "data_offset": 256, 00:16:56.951 "data_size": 7936 00:16:56.951 } 00:16:56.951 ] 00:16:56.951 }' 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.951 04:32:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.210 04:32:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.210 [2024-12-13 04:32:57.177712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.210 [2024-12-13 04:32:57.177775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.210 [2024-12-13 04:32:57.177857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.210 [2024-12-13 04:32:57.177916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.210 [2024-12-13 04:32:57.177948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.210 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.470 [2024-12-13 04:32:57.237604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.470 [2024-12-13 04:32:57.237671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.470 [2024-12-13 04:32:57.237688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:57.470 [2024-12-13 04:32:57.237701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.470 [2024-12-13 04:32:57.239568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.470 [2024-12-13 04:32:57.239603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.470 [2024-12-13 04:32:57.239652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:57.470 [2024-12-13 04:32:57.239686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.470 [2024-12-13 04:32:57.239769] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:57.470 [2024-12-13 04:32:57.239782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.470 [2024-12-13 04:32:57.239804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:57.470 [2024-12-13 04:32:57.239838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.470 [2024-12-13 04:32:57.239897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000002380 00:16:57.470 [2024-12-13 04:32:57.239917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:57.470 [2024-12-13 04:32:57.239996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:57.470 [2024-12-13 04:32:57.240050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:57.470 [2024-12-13 04:32:57.240057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:57.470 [2024-12-13 04:32:57.240117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.470 pt1 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.470 04:32:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.470 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.470 "name": "raid_bdev1", 00:16:57.470 "uuid": "10329a8a-4965-495b-8628-3b3c001a03c0", 00:16:57.470 "strip_size_kb": 0, 00:16:57.470 "state": "online", 00:16:57.470 "raid_level": "raid1", 00:16:57.470 "superblock": true, 00:16:57.470 "num_base_bdevs": 2, 00:16:57.470 "num_base_bdevs_discovered": 1, 00:16:57.470 "num_base_bdevs_operational": 1, 00:16:57.470 "base_bdevs_list": [ 00:16:57.470 { 00:16:57.470 "name": null, 00:16:57.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.470 "is_configured": false, 00:16:57.470 "data_offset": 256, 00:16:57.470 "data_size": 7936 00:16:57.470 }, 00:16:57.470 { 00:16:57.470 "name": "pt2", 00:16:57.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.470 "is_configured": true, 00:16:57.470 "data_offset": 256, 00:16:57.470 "data_size": 7936 00:16:57.470 } 00:16:57.470 ] 00:16:57.470 }' 00:16:57.471 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.471 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.730 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.730 [2024-12-13 04:32:57.745008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 10329a8a-4965-495b-8628-3b3c001a03c0 '!=' 10329a8a-4965-495b-8628-3b3c001a03c0 ']' 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100800 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100800 ']' 00:16:57.990 04:32:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100800 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100800 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100800' 00:16:57.990 killing process with pid 100800 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 100800 00:16:57.990 [2024-12-13 04:32:57.826626] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.990 [2024-12-13 04:32:57.826688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.990 [2024-12-13 04:32:57.826730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.990 [2024-12-13 04:32:57.826737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:57.990 04:32:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 100800 00:16:57.990 [2024-12-13 04:32:57.850899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.250 ************************************ 00:16:58.250 END TEST raid_superblock_test_md_interleaved 00:16:58.250 ************************************ 00:16:58.250 04:32:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:58.250 00:16:58.250 real 0m5.078s 00:16:58.250 user 0m8.298s 00:16:58.250 sys 0m1.166s 00:16:58.250 04:32:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.250 04:32:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.250 04:32:58 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:58.250 04:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:58.250 04:32:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.250 04:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.250 ************************************ 00:16:58.250 START TEST raid_rebuild_test_sb_md_interleaved 00:16:58.250 ************************************ 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.250 04:32:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.250 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:58.251 
04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=101119 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 101119 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 101119 ']' 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.251 04:32:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.251 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:58.251 Zero copy mechanism will not be used. 00:16:58.251 [2024-12-13 04:32:58.221757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
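For readers tracing this log: the setup that `raid_rebuild_test_sb_md_interleaved` performs in the entries below reduces to the RPC sequence sketched here. This is reconstructed from the `rpc_cmd` calls visible in the trace; the `./scripts/rpc.py` invocation path is an assumption (the test harness wraps it via `rpc_cmd` against the bdevperf app's `/var/tmp/spdk.sock`), and the arguments are taken verbatim from the log.

```shell
# Sketch only -- assumes a running SPDK target listening on /var/tmp/spdk.sock.
# Arguments below are copied from the rpc_cmd calls in this trace.

# Two 32 MiB malloc bdevs, 4096-byte blocks, 32 bytes of interleaved metadata (-m 32 -i).
./scripts/rpc.py bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
./scripts/rpc.py bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc

# Wrap each malloc bdev in a passthru bdev, as the test does before assembly.
./scripts/rpc.py bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
./scripts/rpc.py bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

# Assemble a RAID1 array with an on-disk superblock (-s, from superblock=true).
./scripts/rpc.py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

# Inspect the resulting array state, as verify_raid_bdev_state does via jq.
./scripts/rpc.py bdev_raid_get_bdevs all
```

The later rebuild steps in the trace then remove a base bdev (`bdev_raid_remove_base_bdev BaseBdev1`) and re-add a spare (`bdev_raid_add_base_bdev raid_bdev1 spare`) built the same way from `spare_malloc`/`spare_delay`.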
00:16:58.251 [2024-12-13 04:32:58.221903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101119 ] 00:16:58.510 [2024-12-13 04:32:58.377803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.510 [2024-12-13 04:32:58.417316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.510 [2024-12-13 04:32:58.496063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.511 [2024-12-13 04:32:58.496104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.080 BaseBdev1_malloc 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.080 04:32:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.080 [2024-12-13 04:32:59.063346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:59.080 [2024-12-13 04:32:59.063417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.080 [2024-12-13 04:32:59.063456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:59.080 [2024-12-13 04:32:59.063466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.080 [2024-12-13 04:32:59.065743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.080 [2024-12-13 04:32:59.065778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:59.080 BaseBdev1 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.080 BaseBdev2_malloc 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.080 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.340 [2024-12-13 04:32:59.098839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:59.340 [2024-12-13 04:32:59.098891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.340 [2024-12-13 04:32:59.098917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:59.340 [2024-12-13 04:32:59.098926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.340 [2024-12-13 04:32:59.101155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.340 [2024-12-13 04:32:59.101210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:59.340 BaseBdev2 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.340 spare_malloc 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.340 spare_delay 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.340 [2024-12-13 04:32:59.163814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:59.340 [2024-12-13 04:32:59.163891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.340 [2024-12-13 04:32:59.163928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:59.340 [2024-12-13 04:32:59.163945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.340 [2024-12-13 04:32:59.166828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.340 [2024-12-13 04:32:59.166869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.340 spare 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.340 [2024-12-13 04:32:59.175767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.340 [2024-12-13 04:32:59.178095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.340 [2024-12-13 
04:32:59.178270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:59.340 [2024-12-13 04:32:59.178283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:59.340 [2024-12-13 04:32:59.178372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:59.340 [2024-12-13 04:32:59.178469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:59.340 [2024-12-13 04:32:59.178498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:59.340 [2024-12-13 04:32:59.178574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.340 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.341 "name": "raid_bdev1", 00:16:59.341 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:16:59.341 "strip_size_kb": 0, 00:16:59.341 "state": "online", 00:16:59.341 "raid_level": "raid1", 00:16:59.341 "superblock": true, 00:16:59.341 "num_base_bdevs": 2, 00:16:59.341 "num_base_bdevs_discovered": 2, 00:16:59.341 "num_base_bdevs_operational": 2, 00:16:59.341 "base_bdevs_list": [ 00:16:59.341 { 00:16:59.341 "name": "BaseBdev1", 00:16:59.341 "uuid": "abe9b6b7-8777-5f31-8931-24362f8d6a25", 00:16:59.341 "is_configured": true, 00:16:59.341 "data_offset": 256, 00:16:59.341 "data_size": 7936 00:16:59.341 }, 00:16:59.341 { 00:16:59.341 "name": "BaseBdev2", 00:16:59.341 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:16:59.341 "is_configured": true, 00:16:59.341 "data_offset": 256, 00:16:59.341 "data_size": 7936 00:16:59.341 } 00:16:59.341 ] 00:16:59.341 }' 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.341 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.911 04:32:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.911 [2024-12-13 04:32:59.651165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:59.911 04:32:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.911 [2024-12-13 04:32:59.742740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.911 04:32:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.911 "name": "raid_bdev1", 00:16:59.911 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:16:59.911 "strip_size_kb": 0, 00:16:59.911 "state": "online", 00:16:59.911 "raid_level": "raid1", 00:16:59.911 "superblock": true, 00:16:59.911 "num_base_bdevs": 2, 00:16:59.911 "num_base_bdevs_discovered": 1, 00:16:59.911 "num_base_bdevs_operational": 1, 00:16:59.911 "base_bdevs_list": [ 00:16:59.911 { 00:16:59.911 "name": null, 00:16:59.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.911 "is_configured": false, 00:16:59.911 "data_offset": 0, 00:16:59.911 "data_size": 7936 00:16:59.911 }, 00:16:59.911 { 00:16:59.911 "name": "BaseBdev2", 00:16:59.911 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:16:59.911 "is_configured": true, 00:16:59.911 "data_offset": 256, 00:16:59.911 "data_size": 7936 00:16:59.911 } 00:16:59.911 ] 00:16:59.911 }' 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.911 04:32:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.481 04:33:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.481 04:33:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.481 04:33:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.481 [2024-12-13 04:33:00.205957] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.481 [2024-12-13 04:33:00.212333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:00.481 04:33:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.481 04:33:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:00.481 [2024-12-13 04:33:00.214646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.420 "name": "raid_bdev1", 00:17:01.420 
"uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:01.420 "strip_size_kb": 0, 00:17:01.420 "state": "online", 00:17:01.420 "raid_level": "raid1", 00:17:01.420 "superblock": true, 00:17:01.420 "num_base_bdevs": 2, 00:17:01.420 "num_base_bdevs_discovered": 2, 00:17:01.420 "num_base_bdevs_operational": 2, 00:17:01.420 "process": { 00:17:01.420 "type": "rebuild", 00:17:01.420 "target": "spare", 00:17:01.420 "progress": { 00:17:01.420 "blocks": 2560, 00:17:01.420 "percent": 32 00:17:01.420 } 00:17:01.420 }, 00:17:01.420 "base_bdevs_list": [ 00:17:01.420 { 00:17:01.420 "name": "spare", 00:17:01.420 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:01.420 "is_configured": true, 00:17:01.420 "data_offset": 256, 00:17:01.420 "data_size": 7936 00:17:01.420 }, 00:17:01.420 { 00:17:01.420 "name": "BaseBdev2", 00:17:01.420 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:01.420 "is_configured": true, 00:17:01.420 "data_offset": 256, 00:17:01.420 "data_size": 7936 00:17:01.420 } 00:17:01.420 ] 00:17:01.420 }' 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.420 [2024-12-13 04:33:01.354538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:01.420 [2024-12-13 04:33:01.423179] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.420 [2024-12-13 04:33:01.423289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.420 [2024-12-13 04:33:01.423327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.420 [2024-12-13 04:33:01.423348] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.420 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.680 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.681 "name": "raid_bdev1", 00:17:01.681 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:01.681 "strip_size_kb": 0, 00:17:01.681 "state": "online", 00:17:01.681 "raid_level": "raid1", 00:17:01.681 "superblock": true, 00:17:01.681 "num_base_bdevs": 2, 00:17:01.681 "num_base_bdevs_discovered": 1, 00:17:01.681 "num_base_bdevs_operational": 1, 00:17:01.681 "base_bdevs_list": [ 00:17:01.681 { 00:17:01.681 "name": null, 00:17:01.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.681 "is_configured": false, 00:17:01.681 "data_offset": 0, 00:17:01.681 "data_size": 7936 00:17:01.681 }, 00:17:01.681 { 00:17:01.681 "name": "BaseBdev2", 00:17:01.681 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:01.681 "is_configured": true, 00:17:01.681 "data_offset": 256, 00:17:01.681 "data_size": 7936 00:17:01.681 } 00:17:01.681 ] 00:17:01.681 }' 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.681 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.940 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.940 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:01.940 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.940 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.941 "name": "raid_bdev1", 00:17:01.941 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:01.941 "strip_size_kb": 0, 00:17:01.941 "state": "online", 00:17:01.941 "raid_level": "raid1", 00:17:01.941 "superblock": true, 00:17:01.941 "num_base_bdevs": 2, 00:17:01.941 "num_base_bdevs_discovered": 1, 00:17:01.941 "num_base_bdevs_operational": 1, 00:17:01.941 "base_bdevs_list": [ 00:17:01.941 { 00:17:01.941 "name": null, 00:17:01.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.941 "is_configured": false, 00:17:01.941 "data_offset": 0, 00:17:01.941 "data_size": 7936 00:17:01.941 }, 00:17:01.941 { 00:17:01.941 "name": "BaseBdev2", 00:17:01.941 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:01.941 "is_configured": true, 00:17:01.941 "data_offset": 256, 00:17:01.941 "data_size": 7936 00:17:01.941 } 00:17:01.941 ] 00:17:01.941 }' 
00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.941 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.200 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.200 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.200 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.200 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.200 [2024-12-13 04:33:01.992614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.200 [2024-12-13 04:33:01.997433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:02.200 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.200 04:33:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:02.201 [2024-12-13 04:33:01.999668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.140 "name": "raid_bdev1", 00:17:03.140 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:03.140 "strip_size_kb": 0, 00:17:03.140 "state": "online", 00:17:03.140 "raid_level": "raid1", 00:17:03.140 "superblock": true, 00:17:03.140 "num_base_bdevs": 2, 00:17:03.140 "num_base_bdevs_discovered": 2, 00:17:03.140 "num_base_bdevs_operational": 2, 00:17:03.140 "process": { 00:17:03.140 "type": "rebuild", 00:17:03.140 "target": "spare", 00:17:03.140 "progress": { 00:17:03.140 "blocks": 2560, 00:17:03.140 "percent": 32 00:17:03.140 } 00:17:03.140 }, 00:17:03.140 "base_bdevs_list": [ 00:17:03.140 { 00:17:03.140 "name": "spare", 00:17:03.140 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:03.140 "is_configured": true, 00:17:03.140 "data_offset": 256, 00:17:03.140 "data_size": 7936 00:17:03.140 }, 00:17:03.140 { 00:17:03.140 "name": "BaseBdev2", 00:17:03.140 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:03.140 "is_configured": true, 00:17:03.140 "data_offset": 256, 00:17:03.140 "data_size": 7936 00:17:03.140 } 00:17:03.140 ] 00:17:03.140 }' 00:17:03.140 04:33:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.140 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:03.400 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=632 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.400 04:33:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.400 "name": "raid_bdev1", 00:17:03.400 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:03.400 "strip_size_kb": 0, 00:17:03.400 "state": "online", 00:17:03.400 "raid_level": "raid1", 00:17:03.400 "superblock": true, 00:17:03.400 "num_base_bdevs": 2, 00:17:03.400 "num_base_bdevs_discovered": 2, 00:17:03.400 "num_base_bdevs_operational": 2, 00:17:03.400 "process": { 00:17:03.400 "type": "rebuild", 00:17:03.400 "target": "spare", 00:17:03.400 "progress": { 00:17:03.400 "blocks": 2816, 00:17:03.400 "percent": 35 00:17:03.400 } 00:17:03.400 }, 00:17:03.400 "base_bdevs_list": [ 00:17:03.400 { 00:17:03.400 "name": "spare", 00:17:03.400 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:03.400 "is_configured": true, 00:17:03.400 "data_offset": 256, 00:17:03.400 "data_size": 7936 00:17:03.400 }, 00:17:03.400 { 00:17:03.400 "name": "BaseBdev2", 00:17:03.400 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:03.400 "is_configured": true, 00:17:03.400 "data_offset": 256, 00:17:03.400 "data_size": 7936 00:17:03.400 } 00:17:03.400 ] 00:17:03.400 }' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.400 04:33:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.340 04:33:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.340 "name": "raid_bdev1", 00:17:04.340 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:04.340 "strip_size_kb": 0, 00:17:04.340 "state": "online", 00:17:04.340 "raid_level": "raid1", 00:17:04.340 "superblock": true, 00:17:04.340 "num_base_bdevs": 2, 00:17:04.340 "num_base_bdevs_discovered": 2, 00:17:04.340 "num_base_bdevs_operational": 2, 00:17:04.340 "process": { 00:17:04.340 "type": "rebuild", 00:17:04.340 "target": "spare", 00:17:04.340 "progress": { 00:17:04.340 "blocks": 5632, 00:17:04.340 "percent": 70 00:17:04.340 } 00:17:04.340 }, 00:17:04.340 "base_bdevs_list": [ 00:17:04.340 { 00:17:04.340 "name": "spare", 00:17:04.340 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:04.340 "is_configured": true, 00:17:04.340 "data_offset": 256, 00:17:04.340 "data_size": 7936 00:17:04.340 }, 00:17:04.340 { 00:17:04.340 "name": "BaseBdev2", 00:17:04.340 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:04.340 "is_configured": true, 00:17:04.340 "data_offset": 256, 00:17:04.340 "data_size": 7936 00:17:04.340 } 00:17:04.340 ] 00:17:04.340 }' 00:17:04.340 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.600 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.600 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.600 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.600 04:33:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.170 [2024-12-13 04:33:05.119322] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:05.170 [2024-12-13 04:33:05.119419] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:05.170 [2024-12-13 04:33:05.119533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.430 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.689 "name": "raid_bdev1", 00:17:05.689 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:05.689 "strip_size_kb": 0, 00:17:05.689 "state": "online", 00:17:05.689 "raid_level": "raid1", 00:17:05.689 "superblock": true, 00:17:05.689 "num_base_bdevs": 2, 00:17:05.689 
"num_base_bdevs_discovered": 2, 00:17:05.689 "num_base_bdevs_operational": 2, 00:17:05.689 "base_bdevs_list": [ 00:17:05.689 { 00:17:05.689 "name": "spare", 00:17:05.689 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:05.689 "is_configured": true, 00:17:05.689 "data_offset": 256, 00:17:05.689 "data_size": 7936 00:17:05.689 }, 00:17:05.689 { 00:17:05.689 "name": "BaseBdev2", 00:17:05.689 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:05.689 "is_configured": true, 00:17:05.689 "data_offset": 256, 00:17:05.689 "data_size": 7936 00:17:05.689 } 00:17:05.689 ] 00:17:05.689 }' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.689 04:33:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.689 "name": "raid_bdev1", 00:17:05.689 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:05.689 "strip_size_kb": 0, 00:17:05.689 "state": "online", 00:17:05.689 "raid_level": "raid1", 00:17:05.689 "superblock": true, 00:17:05.689 "num_base_bdevs": 2, 00:17:05.689 "num_base_bdevs_discovered": 2, 00:17:05.689 "num_base_bdevs_operational": 2, 00:17:05.689 "base_bdevs_list": [ 00:17:05.689 { 00:17:05.689 "name": "spare", 00:17:05.689 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:05.689 "is_configured": true, 00:17:05.689 "data_offset": 256, 00:17:05.689 "data_size": 7936 00:17:05.689 }, 00:17:05.689 { 00:17:05.689 "name": "BaseBdev2", 00:17:05.689 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:05.689 "is_configured": true, 00:17:05.689 "data_offset": 256, 00:17:05.689 "data_size": 7936 00:17:05.689 } 00:17:05.689 ] 00:17:05.689 }' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.689 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.949 04:33:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.949 "name": 
"raid_bdev1", 00:17:05.949 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:05.949 "strip_size_kb": 0, 00:17:05.949 "state": "online", 00:17:05.949 "raid_level": "raid1", 00:17:05.949 "superblock": true, 00:17:05.949 "num_base_bdevs": 2, 00:17:05.949 "num_base_bdevs_discovered": 2, 00:17:05.949 "num_base_bdevs_operational": 2, 00:17:05.949 "base_bdevs_list": [ 00:17:05.949 { 00:17:05.949 "name": "spare", 00:17:05.949 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:05.949 "is_configured": true, 00:17:05.949 "data_offset": 256, 00:17:05.949 "data_size": 7936 00:17:05.949 }, 00:17:05.949 { 00:17:05.949 "name": "BaseBdev2", 00:17:05.949 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:05.949 "is_configured": true, 00:17:05.949 "data_offset": 256, 00:17:05.949 "data_size": 7936 00:17:05.949 } 00:17:05.949 ] 00:17:05.949 }' 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.949 04:33:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.210 [2024-12-13 04:33:06.083388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.210 [2024-12-13 04:33:06.083417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.210 [2024-12-13 04:33:06.083522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.210 [2024-12-13 04:33:06.083593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.210 [2024-12-13 
04:33:06.083606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.210 04:33:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.210 [2024-12-13 04:33:06.159269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.210 [2024-12-13 04:33:06.159325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.210 [2024-12-13 04:33:06.159345] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:06.210 [2024-12-13 04:33:06.159356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.210 [2024-12-13 04:33:06.161656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.210 [2024-12-13 04:33:06.161695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.210 [2024-12-13 04:33:06.161747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:06.210 [2024-12-13 04:33:06.161799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.210 [2024-12-13 04:33:06.161897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.210 spare 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.210 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.470 [2024-12-13 04:33:06.261789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:17:06.470 [2024-12-13 04:33:06.261813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:06.470 [2024-12-13 04:33:06.261913] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:06.470 [2024-12-13 04:33:06.262001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:17:06.470 [2024-12-13 04:33:06.262012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:17:06.470 [2024-12-13 04:33:06.262088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.470 04:33:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.470 "name": "raid_bdev1", 00:17:06.470 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:06.470 "strip_size_kb": 0, 00:17:06.470 "state": "online", 00:17:06.470 "raid_level": "raid1", 00:17:06.470 "superblock": true, 00:17:06.470 "num_base_bdevs": 2, 00:17:06.470 "num_base_bdevs_discovered": 2, 00:17:06.470 "num_base_bdevs_operational": 2, 00:17:06.470 "base_bdevs_list": [ 00:17:06.470 { 00:17:06.470 "name": "spare", 00:17:06.470 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:06.470 "is_configured": true, 00:17:06.470 "data_offset": 256, 00:17:06.470 "data_size": 7936 00:17:06.470 }, 00:17:06.470 { 00:17:06.470 "name": "BaseBdev2", 00:17:06.470 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:06.470 "is_configured": true, 00:17:06.470 "data_offset": 256, 00:17:06.470 "data_size": 7936 00:17:06.470 } 00:17:06.470 ] 00:17:06.470 }' 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.470 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.730 04:33:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.730 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.990 "name": "raid_bdev1", 00:17:06.990 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:06.990 "strip_size_kb": 0, 00:17:06.990 "state": "online", 00:17:06.990 "raid_level": "raid1", 00:17:06.990 "superblock": true, 00:17:06.990 "num_base_bdevs": 2, 00:17:06.990 "num_base_bdevs_discovered": 2, 00:17:06.990 "num_base_bdevs_operational": 2, 00:17:06.990 "base_bdevs_list": [ 00:17:06.990 { 00:17:06.990 "name": "spare", 00:17:06.990 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:06.990 "is_configured": true, 00:17:06.990 "data_offset": 256, 00:17:06.990 "data_size": 7936 00:17:06.990 }, 00:17:06.990 { 00:17:06.990 "name": "BaseBdev2", 00:17:06.990 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:06.990 "is_configured": true, 00:17:06.990 "data_offset": 256, 00:17:06.990 "data_size": 7936 00:17:06.990 } 00:17:06.990 ] 00:17:06.990 }' 00:17:06.990 04:33:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.990 [2024-12-13 04:33:06.910009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.990 04:33:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.990 "name": "raid_bdev1", 00:17:06.990 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:06.990 "strip_size_kb": 0, 00:17:06.990 "state": "online", 00:17:06.990 
"raid_level": "raid1", 00:17:06.990 "superblock": true, 00:17:06.990 "num_base_bdevs": 2, 00:17:06.990 "num_base_bdevs_discovered": 1, 00:17:06.990 "num_base_bdevs_operational": 1, 00:17:06.990 "base_bdevs_list": [ 00:17:06.990 { 00:17:06.990 "name": null, 00:17:06.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.990 "is_configured": false, 00:17:06.990 "data_offset": 0, 00:17:06.990 "data_size": 7936 00:17:06.990 }, 00:17:06.990 { 00:17:06.990 "name": "BaseBdev2", 00:17:06.990 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:06.990 "is_configured": true, 00:17:06.990 "data_offset": 256, 00:17:06.990 "data_size": 7936 00:17:06.990 } 00:17:06.990 ] 00:17:06.990 }' 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.990 04:33:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.560 04:33:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.560 04:33:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.560 04:33:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.560 [2024-12-13 04:33:07.329293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.560 [2024-12-13 04:33:07.329506] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.560 [2024-12-13 04:33:07.329585] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:07.560 [2024-12-13 04:33:07.329660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.560 [2024-12-13 04:33:07.335529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:17:07.560 04:33:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.560 04:33:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:07.560 [2024-12-13 04:33:07.337769] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:08.500 "name": "raid_bdev1", 00:17:08.500 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:08.500 "strip_size_kb": 0, 00:17:08.500 "state": "online", 00:17:08.500 "raid_level": "raid1", 00:17:08.500 "superblock": true, 00:17:08.500 "num_base_bdevs": 2, 00:17:08.500 "num_base_bdevs_discovered": 2, 00:17:08.500 "num_base_bdevs_operational": 2, 00:17:08.500 "process": { 00:17:08.500 "type": "rebuild", 00:17:08.500 "target": "spare", 00:17:08.500 "progress": { 00:17:08.500 "blocks": 2560, 00:17:08.500 "percent": 32 00:17:08.500 } 00:17:08.500 }, 00:17:08.500 "base_bdevs_list": [ 00:17:08.500 { 00:17:08.500 "name": "spare", 00:17:08.500 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:08.500 "is_configured": true, 00:17:08.500 "data_offset": 256, 00:17:08.500 "data_size": 7936 00:17:08.500 }, 00:17:08.500 { 00:17:08.500 "name": "BaseBdev2", 00:17:08.500 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:08.500 "is_configured": true, 00:17:08.500 "data_offset": 256, 00:17:08.500 "data_size": 7936 00:17:08.500 } 00:17:08.500 ] 00:17:08.500 }' 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.500 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.500 [2024-12-13 04:33:08.490624] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.759 [2024-12-13 04:33:08.545551] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.759 [2024-12-13 04:33:08.545603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.759 [2024-12-13 04:33:08.545622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.759 [2024-12-13 04:33:08.545629] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.759 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.760 04:33:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.760 "name": "raid_bdev1", 00:17:08.760 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:08.760 "strip_size_kb": 0, 00:17:08.760 "state": "online", 00:17:08.760 "raid_level": "raid1", 00:17:08.760 "superblock": true, 00:17:08.760 "num_base_bdevs": 2, 00:17:08.760 "num_base_bdevs_discovered": 1, 00:17:08.760 "num_base_bdevs_operational": 1, 00:17:08.760 "base_bdevs_list": [ 00:17:08.760 { 00:17:08.760 "name": null, 00:17:08.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.760 "is_configured": false, 00:17:08.760 "data_offset": 0, 00:17:08.760 "data_size": 7936 00:17:08.760 }, 00:17:08.760 { 00:17:08.760 "name": "BaseBdev2", 00:17:08.760 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:08.760 "is_configured": true, 00:17:08.760 "data_offset": 256, 00:17:08.760 "data_size": 7936 00:17:08.760 } 00:17:08.760 ] 00:17:08.760 }' 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.760 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.020 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:09.020 04:33:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.020 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.020 [2024-12-13 04:33:08.978916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:09.020 [2024-12-13 04:33:08.979025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.020 [2024-12-13 04:33:08.979074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:09.020 [2024-12-13 04:33:08.979114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.020 [2024-12-13 04:33:08.979366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.020 [2024-12-13 04:33:08.979412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:09.020 [2024-12-13 04:33:08.979504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:09.020 [2024-12-13 04:33:08.979543] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.020 [2024-12-13 04:33:08.979589] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:09.020 [2024-12-13 04:33:08.979709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.020 [2024-12-13 04:33:08.983705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:17:09.020 spare 00:17:09.020 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.020 04:33:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:09.020 [2024-12-13 04:33:08.985918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.402 04:33:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:10.402 "name": "raid_bdev1", 00:17:10.402 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:10.402 "strip_size_kb": 0, 00:17:10.402 "state": "online", 00:17:10.402 "raid_level": "raid1", 00:17:10.402 "superblock": true, 00:17:10.402 "num_base_bdevs": 2, 00:17:10.402 "num_base_bdevs_discovered": 2, 00:17:10.402 "num_base_bdevs_operational": 2, 00:17:10.402 "process": { 00:17:10.402 "type": "rebuild", 00:17:10.402 "target": "spare", 00:17:10.402 "progress": { 00:17:10.402 "blocks": 2560, 00:17:10.402 "percent": 32 00:17:10.402 } 00:17:10.402 }, 00:17:10.402 "base_bdevs_list": [ 00:17:10.402 { 00:17:10.402 "name": "spare", 00:17:10.402 "uuid": "6ba7fd70-b06f-520f-bab3-044f5ea051b9", 00:17:10.402 "is_configured": true, 00:17:10.402 "data_offset": 256, 00:17:10.402 "data_size": 7936 00:17:10.402 }, 00:17:10.402 { 00:17:10.402 "name": "BaseBdev2", 00:17:10.402 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:10.402 "is_configured": true, 00:17:10.402 "data_offset": 256, 00:17:10.402 "data_size": 7936 00:17:10.402 } 00:17:10.402 ] 00:17:10.402 }' 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.402 [2024-12-13 
04:33:10.147255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.402 [2024-12-13 04:33:10.193373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.402 [2024-12-13 04:33:10.193526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.402 [2024-12-13 04:33:10.193543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.402 [2024-12-13 04:33:10.193554] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.402 04:33:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.402 "name": "raid_bdev1", 00:17:10.402 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:10.402 "strip_size_kb": 0, 00:17:10.402 "state": "online", 00:17:10.402 "raid_level": "raid1", 00:17:10.402 "superblock": true, 00:17:10.402 "num_base_bdevs": 2, 00:17:10.402 "num_base_bdevs_discovered": 1, 00:17:10.402 "num_base_bdevs_operational": 1, 00:17:10.402 "base_bdevs_list": [ 00:17:10.402 { 00:17:10.402 "name": null, 00:17:10.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.402 "is_configured": false, 00:17:10.402 "data_offset": 0, 00:17:10.402 "data_size": 7936 00:17:10.402 }, 00:17:10.402 { 00:17:10.402 "name": "BaseBdev2", 00:17:10.402 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:10.402 "is_configured": true, 00:17:10.402 "data_offset": 256, 00:17:10.402 "data_size": 7936 00:17:10.402 } 00:17:10.402 ] 00:17:10.402 }' 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.402 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.662 04:33:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.662 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.922 "name": "raid_bdev1", 00:17:10.922 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:10.922 "strip_size_kb": 0, 00:17:10.922 "state": "online", 00:17:10.922 "raid_level": "raid1", 00:17:10.922 "superblock": true, 00:17:10.922 "num_base_bdevs": 2, 00:17:10.922 "num_base_bdevs_discovered": 1, 00:17:10.922 "num_base_bdevs_operational": 1, 00:17:10.922 "base_bdevs_list": [ 00:17:10.922 { 00:17:10.922 "name": null, 00:17:10.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.922 "is_configured": false, 00:17:10.922 "data_offset": 0, 00:17:10.922 "data_size": 7936 00:17:10.922 }, 00:17:10.922 { 00:17:10.922 "name": "BaseBdev2", 00:17:10.922 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:10.922 "is_configured": true, 00:17:10.922 "data_offset": 256, 
00:17:10.922 "data_size": 7936 00:17:10.922 } 00:17:10.922 ] 00:17:10.922 }' 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.922 [2024-12-13 04:33:10.782407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:10.922 [2024-12-13 04:33:10.782473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.922 [2024-12-13 04:33:10.782510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:10.922 [2024-12-13 04:33:10.782522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.922 [2024-12-13 04:33:10.782716] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.922 [2024-12-13 04:33:10.782737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:10.922 [2024-12-13 04:33:10.782804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:10.922 [2024-12-13 04:33:10.782822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.922 [2024-12-13 04:33:10.782838] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:10.922 [2024-12-13 04:33:10.782854] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:10.922 BaseBdev1 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.922 04:33:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.862 04:33:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.862 "name": "raid_bdev1", 00:17:11.862 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:11.862 "strip_size_kb": 0, 00:17:11.862 "state": "online", 00:17:11.862 "raid_level": "raid1", 00:17:11.862 "superblock": true, 00:17:11.862 "num_base_bdevs": 2, 00:17:11.862 "num_base_bdevs_discovered": 1, 00:17:11.862 "num_base_bdevs_operational": 1, 00:17:11.862 "base_bdevs_list": [ 00:17:11.862 { 00:17:11.862 "name": null, 00:17:11.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.862 "is_configured": false, 00:17:11.862 "data_offset": 0, 00:17:11.862 "data_size": 7936 00:17:11.862 }, 00:17:11.862 { 00:17:11.862 "name": "BaseBdev2", 00:17:11.862 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:11.862 "is_configured": true, 00:17:11.862 "data_offset": 256, 00:17:11.862 "data_size": 7936 00:17:11.862 } 00:17:11.862 ] 00:17:11.862 }' 00:17:11.862 04:33:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.862 04:33:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.432 "name": "raid_bdev1", 00:17:12.432 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:12.432 "strip_size_kb": 0, 00:17:12.432 "state": "online", 00:17:12.432 "raid_level": "raid1", 00:17:12.432 "superblock": true, 00:17:12.432 "num_base_bdevs": 2, 00:17:12.432 "num_base_bdevs_discovered": 1, 00:17:12.432 "num_base_bdevs_operational": 1, 00:17:12.432 "base_bdevs_list": [ 00:17:12.432 { 00:17:12.432 "name": 
null, 00:17:12.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.432 "is_configured": false, 00:17:12.432 "data_offset": 0, 00:17:12.432 "data_size": 7936 00:17:12.432 }, 00:17:12.432 { 00:17:12.432 "name": "BaseBdev2", 00:17:12.432 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:12.432 "is_configured": true, 00:17:12.432 "data_offset": 256, 00:17:12.432 "data_size": 7936 00:17:12.432 } 00:17:12.432 ] 00:17:12.432 }' 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.432 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.433 [2024-12-13 04:33:12.371784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.433 [2024-12-13 04:33:12.371913] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.433 [2024-12-13 04:33:12.371926] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:12.433 request: 00:17:12.433 { 00:17:12.433 "base_bdev": "BaseBdev1", 00:17:12.433 "raid_bdev": "raid_bdev1", 00:17:12.433 "method": "bdev_raid_add_base_bdev", 00:17:12.433 "req_id": 1 00:17:12.433 } 00:17:12.433 Got JSON-RPC error response 00:17:12.433 response: 00:17:12.433 { 00:17:12.433 "code": -22, 00:17:12.433 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:12.433 } 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.433 04:33:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.373 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.633 "name": "raid_bdev1", 00:17:13.633 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:13.633 "strip_size_kb": 0, 
00:17:13.633 "state": "online", 00:17:13.633 "raid_level": "raid1", 00:17:13.633 "superblock": true, 00:17:13.633 "num_base_bdevs": 2, 00:17:13.633 "num_base_bdevs_discovered": 1, 00:17:13.633 "num_base_bdevs_operational": 1, 00:17:13.633 "base_bdevs_list": [ 00:17:13.633 { 00:17:13.633 "name": null, 00:17:13.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.633 "is_configured": false, 00:17:13.633 "data_offset": 0, 00:17:13.633 "data_size": 7936 00:17:13.633 }, 00:17:13.633 { 00:17:13.633 "name": "BaseBdev2", 00:17:13.633 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:13.633 "is_configured": true, 00:17:13.633 "data_offset": 256, 00:17:13.633 "data_size": 7936 00:17:13.633 } 00:17:13.633 ] 00:17:13.633 }' 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.633 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 04:33:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.893 "name": "raid_bdev1", 00:17:13.893 "uuid": "990e7977-2889-4a2f-9ecb-12fabba8e031", 00:17:13.893 "strip_size_kb": 0, 00:17:13.893 "state": "online", 00:17:13.893 "raid_level": "raid1", 00:17:13.893 "superblock": true, 00:17:13.893 "num_base_bdevs": 2, 00:17:13.893 "num_base_bdevs_discovered": 1, 00:17:13.893 "num_base_bdevs_operational": 1, 00:17:13.893 "base_bdevs_list": [ 00:17:13.893 { 00:17:13.893 "name": null, 00:17:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.893 "is_configured": false, 00:17:13.893 "data_offset": 0, 00:17:13.893 "data_size": 7936 00:17:13.893 }, 00:17:13.893 { 00:17:13.893 "name": "BaseBdev2", 00:17:13.893 "uuid": "6b1d7b73-7857-56b8-9f59-bbf038c48d41", 00:17:13.893 "is_configured": true, 00:17:13.893 "data_offset": 256, 00:17:13.893 "data_size": 7936 00:17:13.893 } 00:17:13.893 ] 00:17:13.893 }' 00:17:13.893 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 101119 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 101119 ']' 00:17:14.153 04:33:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 101119 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.153 04:33:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101119 00:17:14.153 killing process with pid 101119 00:17:14.153 Received shutdown signal, test time was about 60.000000 seconds 00:17:14.153 00:17:14.153 Latency(us) 00:17:14.153 [2024-12-13T04:33:14.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.153 [2024-12-13T04:33:14.168Z] =================================================================================================================== 00:17:14.153 [2024-12-13T04:33:14.168Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.153 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.153 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.153 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101119' 00:17:14.153 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 101119 00:17:14.153 [2024-12-13 04:33:14.004638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.153 [2024-12-13 04:33:14.004732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.153 [2024-12-13 04:33:14.004775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.153 [2024-12-13 04:33:14.004784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:14.153 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 101119 00:17:14.153 [2024-12-13 04:33:14.065792] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.413 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:14.413 00:17:14.413 real 0m16.249s 00:17:14.413 user 0m21.593s 00:17:14.413 sys 0m1.675s 00:17:14.413 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.413 04:33:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.413 ************************************ 00:17:14.413 END TEST raid_rebuild_test_sb_md_interleaved 00:17:14.413 ************************************ 00:17:14.673 04:33:14 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:14.673 04:33:14 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:14.673 04:33:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 101119 ']' 00:17:14.673 04:33:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 101119 00:17:14.673 04:33:14 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:14.673 ************************************ 00:17:14.673 END TEST bdev_raid 00:17:14.673 ************************************ 00:17:14.673 00:17:14.673 real 10m13.364s 00:17:14.673 user 14m18.496s 00:17:14.673 sys 1m58.333s 00:17:14.673 04:33:14 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.673 04:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.673 04:33:14 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:14.673 04:33:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:14.673 04:33:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.673 04:33:14 -- common/autotest_common.sh@10 -- # set +x 00:17:14.673 
************************************ 00:17:14.673 START TEST spdkcli_raid 00:17:14.673 ************************************ 00:17:14.673 04:33:14 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:14.673 * Looking for test storage... 00:17:14.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:14.673 04:33:14 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:14.673 04:33:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:14.673 04:33:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:14.934 04:33:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.934 04:33:14 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:14.934 04:33:14 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.934 04:33:14 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.934 --rc genhtml_branch_coverage=1 00:17:14.934 --rc genhtml_function_coverage=1 00:17:14.934 --rc genhtml_legend=1 00:17:14.934 --rc geninfo_all_blocks=1 00:17:14.934 --rc geninfo_unexecuted_blocks=1 00:17:14.934 00:17:14.934 ' 00:17:14.934 04:33:14 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.934 --rc genhtml_branch_coverage=1 00:17:14.934 --rc genhtml_function_coverage=1 00:17:14.934 --rc genhtml_legend=1 00:17:14.934 --rc geninfo_all_blocks=1 00:17:14.934 --rc geninfo_unexecuted_blocks=1 00:17:14.934 00:17:14.934 ' 00:17:14.934 
04:33:14 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.934 --rc genhtml_branch_coverage=1 00:17:14.934 --rc genhtml_function_coverage=1 00:17:14.934 --rc genhtml_legend=1 00:17:14.934 --rc geninfo_all_blocks=1 00:17:14.934 --rc geninfo_unexecuted_blocks=1 00:17:14.934 00:17:14.934 ' 00:17:14.934 04:33:14 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.934 --rc genhtml_branch_coverage=1 00:17:14.934 --rc genhtml_function_coverage=1 00:17:14.934 --rc genhtml_legend=1 00:17:14.934 --rc geninfo_all_blocks=1 00:17:14.934 --rc geninfo_unexecuted_blocks=1 00:17:14.934 00:17:14.934 ' 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:14.934 04:33:14 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:14.934 04:33:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101785 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:14.935 04:33:14 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101785 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 101785 ']' 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.935 04:33:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.935 [2024-12-13 04:33:14.904718] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:14.935 [2024-12-13 04:33:14.904915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101785 ] 00:17:15.199 [2024-12-13 04:33:15.059702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.199 [2024-12-13 04:33:15.101281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.199 [2024-12-13 04:33:15.101391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.769 04:33:15 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.769 04:33:15 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:15.769 04:33:15 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:15.769 04:33:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.769 04:33:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.769 04:33:15 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:15.769 04:33:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.769 04:33:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.769 04:33:15 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:15.769 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:15.769 ' 00:17:17.685 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:17.685 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:17.685 04:33:17 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:17.685 04:33:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.685 04:33:17 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.685 04:33:17 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:17.685 04:33:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.685 04:33:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.685 04:33:17 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:17.685 ' 00:17:18.653 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:18.653 04:33:18 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:18.653 04:33:18 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.653 04:33:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.653 04:33:18 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:18.653 04:33:18 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.653 04:33:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.653 04:33:18 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:18.653 04:33:18 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:19.221 04:33:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:19.221 04:33:19 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:19.221 04:33:19 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:19.221 04:33:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.221 04:33:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.221 04:33:19 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:19.221 04:33:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.221 04:33:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.221 04:33:19 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:19.221 ' 00:17:20.159 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:20.418 04:33:20 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:20.418 04:33:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.418 04:33:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.418 04:33:20 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:20.418 04:33:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.418 04:33:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.418 04:33:20 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:20.418 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:20.418 ' 00:17:21.799 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:21.799 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:21.799 04:33:21 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:21.799 04:33:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.799 04:33:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.059 04:33:21 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101785 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101785 ']' 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101785 00:17:22.059 04:33:21 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101785 00:17:22.059 killing process with pid 101785 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101785' 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 101785 00:17:22.059 04:33:21 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 101785 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101785 ']' 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101785 00:17:22.628 04:33:22 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101785 ']' 00:17:22.628 04:33:22 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101785 00:17:22.628 Process with pid 101785 is not found 00:17:22.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (101785) - No such process 00:17:22.628 04:33:22 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 101785 is not found' 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:22.628 04:33:22 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:22.628 00:17:22.628 real 0m7.925s 00:17:22.628 user 0m16.536s 
00:17:22.628 sys 0m1.223s 00:17:22.628 04:33:22 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.628 04:33:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.628 ************************************ 00:17:22.628 END TEST spdkcli_raid 00:17:22.628 ************************************ 00:17:22.628 04:33:22 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:22.628 04:33:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.628 04:33:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.628 04:33:22 -- common/autotest_common.sh@10 -- # set +x 00:17:22.628 ************************************ 00:17:22.628 START TEST blockdev_raid5f 00:17:22.628 ************************************ 00:17:22.628 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:22.887 * Looking for test storage... 00:17:22.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:22.887 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.887 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.887 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.887 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.887 04:33:22 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.887 04:33:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.888 04:33:22 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:17:22.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.888 --rc genhtml_branch_coverage=1 00:17:22.888 --rc genhtml_function_coverage=1 00:17:22.888 --rc genhtml_legend=1 00:17:22.888 --rc geninfo_all_blocks=1 00:17:22.888 --rc geninfo_unexecuted_blocks=1 00:17:22.888 00:17:22.888 ' 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:22.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.888 --rc genhtml_branch_coverage=1 00:17:22.888 --rc genhtml_function_coverage=1 00:17:22.888 --rc genhtml_legend=1 00:17:22.888 --rc geninfo_all_blocks=1 00:17:22.888 --rc geninfo_unexecuted_blocks=1 00:17:22.888 00:17:22.888 ' 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:22.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.888 --rc genhtml_branch_coverage=1 00:17:22.888 --rc genhtml_function_coverage=1 00:17:22.888 --rc genhtml_legend=1 00:17:22.888 --rc geninfo_all_blocks=1 00:17:22.888 --rc geninfo_unexecuted_blocks=1 00:17:22.888 00:17:22.888 ' 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:22.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.888 --rc genhtml_branch_coverage=1 00:17:22.888 --rc genhtml_function_coverage=1 00:17:22.888 --rc genhtml_legend=1 00:17:22.888 --rc geninfo_all_blocks=1 00:17:22.888 --rc geninfo_unexecuted_blocks=1 00:17:22.888 00:17:22.888 ' 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=102043 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:22.888 04:33:22 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 102043 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 102043 ']' 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.888 04:33:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.148 [2024-12-13 04:33:22.927331] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:23.148 [2024-12-13 04:33:22.927594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102043 ] 00:17:23.149 [2024-12-13 04:33:23.085745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.149 [2024-12-13 04:33:23.125460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 Malloc0 00:17:24.090 Malloc1 00:17:24.090 Malloc2 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 
04:33:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "969d5ea6-a968-409d-a241-f650436cccdb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "969d5ea6-a968-409d-a241-f650436cccdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "969d5ea6-a968-409d-a241-f650436cccdb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "63a2ef31-1386-4ce0-afc7-104c131b9596",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7acf5034-4720-4a00-a908-34b98fdbf107",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "63d7ca8f-b34d-4440-820c-6e963a6e5031",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:24.090 04:33:23 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 102043 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 102043 ']' 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 102043 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:24.090 04:33:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102043 00:17:24.090 killing process with pid 102043 00:17:24.090 04:33:24 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.090 04:33:24 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.090 04:33:24 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102043' 00:17:24.090 04:33:24 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 102043 00:17:24.090 04:33:24 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 102043 00:17:25.031 04:33:24 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:25.031 04:33:24 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:25.032 04:33:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:25.032 04:33:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.032 04:33:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.032 ************************************ 00:17:25.032 START TEST bdev_hello_world 00:17:25.032 ************************************ 00:17:25.032 04:33:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:25.032 [2024-12-13 04:33:24.789858] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:25.032 [2024-12-13 04:33:24.790026] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102087 ] 00:17:25.032 [2024-12-13 04:33:24.944711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.032 [2024-12-13 04:33:24.987127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.292 [2024-12-13 04:33:25.235363] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:25.292 [2024-12-13 04:33:25.235411] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:25.292 [2024-12-13 04:33:25.235428] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:25.292 [2024-12-13 04:33:25.235729] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:25.292 [2024-12-13 04:33:25.235892] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:25.292 [2024-12-13 04:33:25.235915] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:25.292 [2024-12-13 04:33:25.235967] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:25.292 00:17:25.292 [2024-12-13 04:33:25.235984] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:25.862 00:17:25.862 real 0m0.890s 00:17:25.862 user 0m0.504s 00:17:25.862 sys 0m0.279s 00:17:25.862 ************************************ 00:17:25.862 END TEST bdev_hello_world 00:17:25.862 ************************************ 00:17:25.862 04:33:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.862 04:33:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:25.862 04:33:25 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:25.862 04:33:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:25.862 04:33:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.862 04:33:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.862 ************************************ 00:17:25.862 START TEST bdev_bounds 00:17:25.862 ************************************ 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=102113 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 102113' 00:17:25.862 Process bdevio pid: 102113 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 102113 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 102113 ']' 00:17:25.862 04:33:25 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.862 04:33:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:25.862 [2024-12-13 04:33:25.758139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:25.862 [2024-12-13 04:33:25.758322] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102113 ] 00:17:26.122 [2024-12-13 04:33:25.915047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.122 [2024-12-13 04:33:25.957336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.122 [2024-12-13 04:33:25.957506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.122 [2024-12-13 04:33:25.957587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.693 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.693 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:26.693 04:33:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:26.693 I/O targets: 00:17:26.693 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:26.693 
00:17:26.693 00:17:26.693 CUnit - A unit testing framework for C - Version 2.1-3 00:17:26.693 http://cunit.sourceforge.net/ 00:17:26.693 00:17:26.693 00:17:26.693 Suite: bdevio tests on: raid5f 00:17:26.693 Test: blockdev write read block ...passed 00:17:26.693 Test: blockdev write zeroes read block ...passed 00:17:26.693 Test: blockdev write zeroes read no split ...passed 00:17:26.953 Test: blockdev write zeroes read split ...passed 00:17:26.953 Test: blockdev write zeroes read split partial ...passed 00:17:26.953 Test: blockdev reset ...passed 00:17:26.953 Test: blockdev write read 8 blocks ...passed 00:17:26.953 Test: blockdev write read size > 128k ...passed 00:17:26.953 Test: blockdev write read invalid size ...passed 00:17:26.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:26.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:26.953 Test: blockdev write read max offset ...passed 00:17:26.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:26.953 Test: blockdev writev readv 8 blocks ...passed 00:17:26.953 Test: blockdev writev readv 30 x 1block ...passed 00:17:26.953 Test: blockdev writev readv block ...passed 00:17:26.953 Test: blockdev writev readv size > 128k ...passed 00:17:26.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:26.953 Test: blockdev comparev and writev ...passed 00:17:26.953 Test: blockdev nvme passthru rw ...passed 00:17:26.953 Test: blockdev nvme passthru vendor specific ...passed 00:17:26.953 Test: blockdev nvme admin passthru ...passed 00:17:26.953 Test: blockdev copy ...passed 00:17:26.953 00:17:26.953 Run Summary: Type Total Ran Passed Failed Inactive 00:17:26.953 suites 1 1 n/a 0 0 00:17:26.953 tests 23 23 23 0 0 00:17:26.953 asserts 130 130 130 0 n/a 00:17:26.953 00:17:26.953 Elapsed time = 0.312 seconds 00:17:26.953 0 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 102113 
00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 102113 ']' 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 102113 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102113 00:17:26.953 killing process with pid 102113 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102113' 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 102113 00:17:26.953 04:33:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 102113 00:17:27.524 04:33:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:27.524 00:17:27.524 real 0m1.568s 00:17:27.524 user 0m3.748s 00:17:27.524 sys 0m0.413s 00:17:27.524 04:33:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.524 ************************************ 00:17:27.524 END TEST bdev_bounds 00:17:27.524 ************************************ 00:17:27.524 04:33:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:27.524 04:33:27 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:27.524 04:33:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:27.524 04:33:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:17:27.524 04:33:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.524 ************************************ 00:17:27.524 START TEST bdev_nbd 00:17:27.524 ************************************ 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:27.524 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:27.525 04:33:27 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=102162 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 102162 /var/tmp/spdk-nbd.sock 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 102162 ']' 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:27.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.525 04:33:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:27.525 [2024-12-13 04:33:27.411278] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:27.525 [2024-12-13 04:33:27.411455] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.785 [2024-12-13 04:33:27.569313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.785 [2024-12-13 04:33:27.608844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:28.356 04:33:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.616 1+0 records in 00:17:28.616 1+0 records out 00:17:28.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284582 s, 14.4 MB/s 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:28.616 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:28.876 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:28.877 { 00:17:28.877 "nbd_device": "/dev/nbd0", 00:17:28.877 "bdev_name": "raid5f" 00:17:28.877 } 00:17:28.877 ]' 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:28.877 { 00:17:28.877 "nbd_device": "/dev/nbd0", 00:17:28.877 "bdev_name": "raid5f" 00:17:28.877 } 00:17:28.877 ]' 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.877 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.137 04:33:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:29.137 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:29.137 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:29.137 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:29.398 /dev/nbd0 00:17:29.398 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:29.658 04:33:29 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.658 1+0 records in 00:17:29.658 1+0 records out 00:17:29.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562795 s, 7.3 MB/s 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.658 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:29.659 { 00:17:29.659 "nbd_device": "/dev/nbd0", 00:17:29.659 "bdev_name": "raid5f" 00:17:29.659 } 00:17:29.659 ]' 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:29.659 { 00:17:29.659 "nbd_device": "/dev/nbd0", 00:17:29.659 "bdev_name": "raid5f" 00:17:29.659 } 00:17:29.659 ]' 00:17:29.659 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:29.919 256+0 records in 00:17:29.919 256+0 records out 00:17:29.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134194 s, 78.1 MB/s 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:29.919 256+0 records in 00:17:29.919 256+0 records out 00:17:29.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302135 s, 34.7 MB/s 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.919 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:30.179 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.179 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.179 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.179 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.179 04:33:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.179 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.179 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.179 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.179 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:30.179 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.179 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:30.440 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:30.440 malloc_lvol_verify 00:17:30.701 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:30.701 ff230789-b7a3-4020-b965-6586d4926ecb 00:17:30.701 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:30.962 70e47fd7-3cef-42e2-846a-7649520586ff 00:17:30.962 04:33:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:31.221 /dev/nbd0 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:31.221 mke2fs 1.47.0 (5-Feb-2023) 00:17:31.221 Discarding device blocks: 0/4096 done 00:17:31.221 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:31.221 00:17:31.221 Allocating group tables: 0/1 done 00:17:31.221 Writing inode tables: 0/1 done 00:17:31.221 Creating journal (1024 blocks): done 00:17:31.221 Writing superblocks and filesystem accounting information: 0/1 done 00:17:31.221 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.221 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.222 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:31.222 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.222 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 102162 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 102162 ']' 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 102162 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102162 00:17:31.482 killing process with pid 102162 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102162' 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 102162 00:17:31.482 04:33:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 102162 00:17:32.053 04:33:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:32.053 00:17:32.053 real 0m4.464s 00:17:32.053 user 0m6.367s 00:17:32.053 sys 0m1.325s 00:17:32.053 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.053 ************************************ 00:17:32.053 END TEST bdev_nbd 00:17:32.053 ************************************ 00:17:32.053 04:33:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:32.053 04:33:31 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:32.053 04:33:31 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:17:32.053 04:33:31 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:17:32.053 04:33:31 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:32.053 04:33:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.053 04:33:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.053 04:33:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:32.053 ************************************ 00:17:32.053 START TEST bdev_fio 00:17:32.053 ************************************ 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:32.053 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.053 04:33:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:32.053 ************************************ 00:17:32.053 START TEST bdev_fio_rw_verify 00:17:32.053 ************************************ 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:32.054 04:33:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:32.328 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:32.328 fio-3.35 00:17:32.328 Starting 1 thread 00:17:44.547 00:17:44.547 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102349: Fri Dec 13 04:33:42 2024 00:17:44.547 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec) 00:17:44.547 slat (nsec): min=17770, max=91895, avg=19719.16, stdev=2176.02 00:17:44.547 clat (usec): min=11, max=301, avg=134.38, stdev=46.70 00:17:44.547 lat (usec): min=31, max=321, avg=154.10, stdev=46.96 00:17:44.547 clat percentiles (usec): 00:17:44.547 | 50.000th=[ 137], 99.000th=[ 223], 99.900th=[ 245], 99.990th=[ 269], 00:17:44.547 | 99.999th=[ 293] 00:17:44.547 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(489MiB/9870msec); 0 zone resets 00:17:44.547 slat (usec): min=7, max=308, avg=16.44, stdev= 3.77 00:17:44.547 clat (usec): min=61, max=1828, avg=303.32, stdev=42.84 00:17:44.547 lat (usec): min=77, max=2042, avg=319.76, stdev=43.87 00:17:44.547 clat percentiles (usec): 00:17:44.547 | 50.000th=[ 306], 99.000th=[ 383], 99.900th=[ 619], 99.990th=[ 1090], 00:17:44.547 | 99.999th=[ 1745] 00:17:44.547 bw ( KiB/s): min=47272, max=54024, per=98.82%, avg=50128.42, stdev=1520.34, samples=19 00:17:44.547 iops : min=11818, max=13506, avg=12532.11, stdev=380.08, samples=19 00:17:44.547 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.21%, 250=40.31%, 500=45.41% 00:17:44.547 lat (usec) : 750=0.05%, 1000=0.02% 00:17:44.547 lat (msec) : 2=0.01% 00:17:44.547 cpu : usr=98.83%, sys=0.48%, ctx=19, majf=0, minf=13014 00:17:44.547 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:44.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.547 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.547 issued rwts: total=121355,125168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:44.547 00:17:44.547 Run status group 0 (all jobs): 00:17:44.547 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:17:44.547 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=489MiB (513MB), run=9870-9870msec 00:17:44.547 ----------------------------------------------------- 00:17:44.547 Suppressions used: 00:17:44.547 count bytes template 00:17:44.547 1 7 /usr/src/fio/parse.c 00:17:44.547 62 5952 /usr/src/fio/iolog.c 00:17:44.547 1 8 libtcmalloc_minimal.so 00:17:44.548 1 904 libcrypto.so 00:17:44.548 ----------------------------------------------------- 00:17:44.548 00:17:44.548 00:17:44.548 real 0m11.391s 00:17:44.548 user 0m11.820s 00:17:44.548 sys 0m0.702s 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:44.548 ************************************ 00:17:44.548 END TEST bdev_fio_rw_verify 00:17:44.548 ************************************ 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "969d5ea6-a968-409d-a241-f650436cccdb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "969d5ea6-a968-409d-a241-f650436cccdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "969d5ea6-a968-409d-a241-f650436cccdb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "63a2ef31-1386-4ce0-afc7-104c131b9596",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7acf5034-4720-4a00-a908-34b98fdbf107",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "63d7ca8f-b34d-4440-820c-6e963a6e5031",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:44.548 /home/vagrant/spdk_repo/spdk 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:44.548 00:17:44.548 real 0m11.680s 00:17:44.548 user 0m11.939s 00:17:44.548 sys 0m0.843s 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.548 04:33:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:44.548 ************************************ 00:17:44.548 END TEST bdev_fio 00:17:44.548 ************************************ 00:17:44.548 04:33:43 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:44.548 04:33:43 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:44.548 04:33:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:44.548 04:33:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.548 04:33:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:44.548 ************************************ 00:17:44.548 START TEST bdev_verify 00:17:44.548 ************************************ 00:17:44.548 04:33:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:44.548 [2024-12-13 04:33:43.708985] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:44.548 [2024-12-13 04:33:43.709183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102508 ] 00:17:44.548 [2024-12-13 04:33:43.867647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:44.548 [2024-12-13 04:33:43.918006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.548 [2024-12-13 04:33:43.918113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.548 Running I/O for 5 seconds... 00:17:46.428 13782.00 IOPS, 53.84 MiB/s [2024-12-13T04:33:47.382Z] 12258.50 IOPS, 47.88 MiB/s [2024-12-13T04:33:48.323Z] 11750.67 IOPS, 45.90 MiB/s [2024-12-13T04:33:49.262Z] 11468.50 IOPS, 44.80 MiB/s [2024-12-13T04:33:49.262Z] 11341.20 IOPS, 44.30 MiB/s 00:17:49.247 Latency(us) 00:17:49.247 [2024-12-13T04:33:49.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.247 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:49.247 Verification LBA range: start 0x0 length 0x2000 00:17:49.247 raid5f : 5.02 6780.89 26.49 0.00 0.00 28383.58 216.43 20261.79 00:17:49.247 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:49.247 Verification LBA range: start 0x2000 length 0x2000 00:17:49.247 raid5f : 5.01 4556.47 17.80 0.00 0.00 42233.98 565.21 35486.74 00:17:49.247 [2024-12-13T04:33:49.262Z] =================================================================================================================== 00:17:49.247 [2024-12-13T04:33:49.262Z] Total : 11337.36 44.29 0.00 0.00 33946.39 216.43 35486.74 00:17:49.818 00:17:49.818 real 0m5.938s 00:17:49.818 user 0m10.942s 00:17:49.818 sys 0m0.338s 00:17:49.818 04:33:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.818 04:33:49 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:49.818 ************************************ 00:17:49.818 END TEST bdev_verify 00:17:49.818 ************************************ 00:17:49.818 04:33:49 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:49.818 04:33:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:49.818 04:33:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.818 04:33:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:49.818 ************************************ 00:17:49.818 START TEST bdev_verify_big_io 00:17:49.818 ************************************ 00:17:49.818 04:33:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:49.818 [2024-12-13 04:33:49.721829] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:49.818 [2024-12-13 04:33:49.722002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102591 ] 00:17:50.078 [2024-12-13 04:33:49.880054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:50.078 [2024-12-13 04:33:49.929382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.078 [2024-12-13 04:33:49.929503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.339 Running I/O for 5 seconds... 
00:17:52.673 633.00 IOPS, 39.56 MiB/s [2024-12-13T04:33:53.661Z] 760.00 IOPS, 47.50 MiB/s [2024-12-13T04:33:54.601Z] 739.67 IOPS, 46.23 MiB/s [2024-12-13T04:33:55.541Z] 761.50 IOPS, 47.59 MiB/s [2024-12-13T04:33:55.541Z] 786.60 IOPS, 49.16 MiB/s 00:17:55.526 Latency(us) 00:17:55.526 [2024-12-13T04:33:55.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.526 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.526 Verification LBA range: start 0x0 length 0x200 00:17:55.526 raid5f : 5.30 454.87 28.43 0.00 0.00 7043077.14 175.29 311367.55 00:17:55.526 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:55.526 Verification LBA range: start 0x200 length 0x200 00:17:55.526 raid5f : 5.27 337.64 21.10 0.00 0.00 9392914.16 429.28 408440.96 00:17:55.526 [2024-12-13T04:33:55.541Z] =================================================================================================================== 00:17:55.526 [2024-12-13T04:33:55.541Z] Total : 792.51 49.53 0.00 0.00 8040453.66 175.29 408440.96 00:17:56.096 00:17:56.096 real 0m6.225s 00:17:56.096 user 0m11.535s 00:17:56.096 sys 0m0.319s 00:17:56.096 04:33:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.096 04:33:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.096 ************************************ 00:17:56.096 END TEST bdev_verify_big_io 00:17:56.096 ************************************ 00:17:56.096 04:33:55 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.096 04:33:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:56.096 04:33:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.096 04:33:55 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:56.096 ************************************ 00:17:56.096 START TEST bdev_write_zeroes 00:17:56.096 ************************************ 00:17:56.096 04:33:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.096 [2024-12-13 04:33:56.014891] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:56.096 [2024-12-13 04:33:56.015084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102678 ] 00:17:56.356 [2024-12-13 04:33:56.170653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.356 [2024-12-13 04:33:56.218656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.616 Running I/O for 1 seconds... 
00:17:57.557 28719.00 IOPS, 112.18 MiB/s 00:17:57.557 Latency(us) 00:17:57.557 [2024-12-13T04:33:57.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.557 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:57.557 raid5f : 1.01 28696.30 112.09 0.00 0.00 4447.29 1502.46 6038.47 00:17:57.557 [2024-12-13T04:33:57.572Z] =================================================================================================================== 00:17:57.557 [2024-12-13T04:33:57.572Z] Total : 28696.30 112.09 0.00 0.00 4447.29 1502.46 6038.47 00:17:58.128 00:17:58.128 real 0m1.907s 00:17:58.128 user 0m1.490s 00:17:58.128 sys 0m0.303s 00:17:58.128 ************************************ 00:17:58.128 END TEST bdev_write_zeroes 00:17:58.128 ************************************ 00:17:58.128 04:33:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.128 04:33:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:58.128 04:33:57 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.128 04:33:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:58.128 04:33:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.128 04:33:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:58.128 ************************************ 00:17:58.128 START TEST bdev_json_nonenclosed 00:17:58.128 ************************************ 00:17:58.128 04:33:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.128 [2024-12-13 
04:33:58.003927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:58.128 [2024-12-13 04:33:58.004134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102719 ] 00:17:58.389 [2024-12-13 04:33:58.159357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.389 [2024-12-13 04:33:58.201856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.389 [2024-12-13 04:33:58.202077] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:58.389 [2024-12-13 04:33:58.202165] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:58.389 [2024-12-13 04:33:58.202197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:58.389 00:17:58.389 real 0m0.392s 00:17:58.389 user 0m0.162s 00:17:58.389 sys 0m0.126s 00:17:58.389 04:33:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.389 04:33:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:58.389 ************************************ 00:17:58.389 END TEST bdev_json_nonenclosed 00:17:58.389 ************************************ 00:17:58.389 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.389 04:33:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:58.389 04:33:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.389 04:33:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:58.389 
************************************ 00:17:58.389 START TEST bdev_json_nonarray 00:17:58.389 ************************************ 00:17:58.389 04:33:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.650 [2024-12-13 04:33:58.463734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:58.650 [2024-12-13 04:33:58.463842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102740 ] 00:17:58.650 [2024-12-13 04:33:58.620756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.911 [2024-12-13 04:33:58.669002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.911 [2024-12-13 04:33:58.669147] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:58.911 [2024-12-13 04:33:58.669174] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:58.911 [2024-12-13 04:33:58.669191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:58.911 00:17:58.911 real 0m0.396s 00:17:58.911 user 0m0.175s 00:17:58.911 sys 0m0.118s 00:17:58.911 04:33:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.911 ************************************ 00:17:58.911 END TEST bdev_json_nonarray 00:17:58.911 ************************************ 00:17:58.911 04:33:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:58.911 04:33:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:58.911 ************************************ 00:17:58.911 END TEST blockdev_raid5f 00:17:58.911 ************************************ 00:17:58.911 00:17:58.911 real 0m36.295s 00:17:58.911 user 0m48.992s 00:17:58.911 sys 0m5.274s 00:17:58.911 04:33:58 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.911 04:33:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:58.911 04:33:58 -- spdk/autotest.sh@194 -- # uname -s 00:17:58.911 04:33:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:58.911 04:33:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:58.911 04:33:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:58.911 04:33:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:58.911 04:33:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:58.911 04:33:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:58.911 04:33:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.911 04:33:58 -- common/autotest_common.sh@10 -- # set +x 00:17:59.172 04:33:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:17:59.172 04:33:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:59.172 04:33:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:59.172 04:33:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:17:59.172 04:33:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:17:59.172 04:33:58 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:17:59.172 04:33:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:17:59.172 04:33:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.172 04:33:58 -- common/autotest_common.sh@10 -- # set +x 00:17:59.172 04:33:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:17:59.172 04:33:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:17:59.172 04:33:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:17:59.172 04:33:58 -- common/autotest_common.sh@10 -- # set +x 00:18:01.718 INFO: APP EXITING 00:18:01.718 INFO: killing all VMs 00:18:01.718 INFO: killing vhost app 00:18:01.718 INFO: EXIT DONE 00:18:01.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.978 Waiting for block devices as requested 00:18:01.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:01.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:02.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:03.180 Cleaning 00:18:03.180 Removing: /var/run/dpdk/spdk0/config 00:18:03.180 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:03.180 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:03.180 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:03.180 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:03.180 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:03.180 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:03.180 Removing: /dev/shm/spdk_tgt_trace.pid70688 00:18:03.180 Removing: /var/run/dpdk/spdk0 00:18:03.180 Removing: /var/run/dpdk/spdk_pid100800 00:18:03.180 Removing: /var/run/dpdk/spdk_pid101119 00:18:03.180 Removing: /var/run/dpdk/spdk_pid101785 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102043 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102087 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102113 00:18:03.180 Removing: 
/var/run/dpdk/spdk_pid102345 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102508 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102591 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102678 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102719 00:18:03.180 Removing: /var/run/dpdk/spdk_pid102740 00:18:03.180 Removing: /var/run/dpdk/spdk_pid70525 00:18:03.181 Removing: /var/run/dpdk/spdk_pid70688 00:18:03.181 Removing: /var/run/dpdk/spdk_pid70890 00:18:03.181 Removing: /var/run/dpdk/spdk_pid70983 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71011 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71123 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71141 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71329 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71409 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71494 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71594 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71680 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71724 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71756 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71827 00:18:03.181 Removing: /var/run/dpdk/spdk_pid71939 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72371 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72418 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72466 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72482 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72551 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72567 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72647 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72663 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72705 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72723 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72771 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72785 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72929 00:18:03.181 Removing: /var/run/dpdk/spdk_pid72965 00:18:03.181 Removing: /var/run/dpdk/spdk_pid73049 00:18:03.181 Removing: /var/run/dpdk/spdk_pid74243 00:18:03.441 Removing: /var/run/dpdk/spdk_pid74438 00:18:03.441 Removing: 
/var/run/dpdk/spdk_pid74573 00:18:03.441 Removing: /var/run/dpdk/spdk_pid75183 00:18:03.442 Removing: /var/run/dpdk/spdk_pid75378 00:18:03.442 Removing: /var/run/dpdk/spdk_pid75512 00:18:03.442 Removing: /var/run/dpdk/spdk_pid76117 00:18:03.442 Removing: /var/run/dpdk/spdk_pid76436 00:18:03.442 Removing: /var/run/dpdk/spdk_pid76571 00:18:03.442 Removing: /var/run/dpdk/spdk_pid77906 00:18:03.442 Removing: /var/run/dpdk/spdk_pid78148 00:18:03.442 Removing: /var/run/dpdk/spdk_pid78283 00:18:03.442 Removing: /var/run/dpdk/spdk_pid79625 00:18:03.442 Removing: /var/run/dpdk/spdk_pid79866 00:18:03.442 Removing: /var/run/dpdk/spdk_pid80000 00:18:03.442 Removing: /var/run/dpdk/spdk_pid81341 00:18:03.442 Removing: /var/run/dpdk/spdk_pid81776 00:18:03.442 Removing: /var/run/dpdk/spdk_pid81905 00:18:03.442 Removing: /var/run/dpdk/spdk_pid83335 00:18:03.442 Removing: /var/run/dpdk/spdk_pid83587 00:18:03.442 Removing: /var/run/dpdk/spdk_pid83723 00:18:03.442 Removing: /var/run/dpdk/spdk_pid85165 00:18:03.442 Removing: /var/run/dpdk/spdk_pid85413 00:18:03.442 Removing: /var/run/dpdk/spdk_pid85542 00:18:03.442 Removing: /var/run/dpdk/spdk_pid86978 00:18:03.442 Removing: /var/run/dpdk/spdk_pid87454 00:18:03.442 Removing: /var/run/dpdk/spdk_pid87583 00:18:03.442 Removing: /var/run/dpdk/spdk_pid87716 00:18:03.442 Removing: /var/run/dpdk/spdk_pid88130 00:18:03.442 Removing: /var/run/dpdk/spdk_pid88839 00:18:03.442 Removing: /var/run/dpdk/spdk_pid89205 00:18:03.442 Removing: /var/run/dpdk/spdk_pid89902 00:18:03.442 Removing: /var/run/dpdk/spdk_pid90331 00:18:03.442 Removing: /var/run/dpdk/spdk_pid91074 00:18:03.442 Removing: /var/run/dpdk/spdk_pid91472 00:18:03.442 Removing: /var/run/dpdk/spdk_pid93397 00:18:03.442 Removing: /var/run/dpdk/spdk_pid93830 00:18:03.442 Removing: /var/run/dpdk/spdk_pid94253 00:18:03.442 Removing: /var/run/dpdk/spdk_pid96298 00:18:03.442 Removing: /var/run/dpdk/spdk_pid96779 00:18:03.442 Removing: /var/run/dpdk/spdk_pid97284 00:18:03.442 Removing: 
/var/run/dpdk/spdk_pid98322 00:18:03.442 Removing: /var/run/dpdk/spdk_pid98639 00:18:03.442 Removing: /var/run/dpdk/spdk_pid99559 00:18:03.442 Removing: /var/run/dpdk/spdk_pid99882 00:18:03.442 Clean 00:18:03.703 04:34:03 -- common/autotest_common.sh@1453 -- # return 0 00:18:03.703 04:34:03 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:18:03.703 04:34:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.703 04:34:03 -- common/autotest_common.sh@10 -- # set +x 00:18:03.703 04:34:03 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:18:03.703 04:34:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.703 04:34:03 -- common/autotest_common.sh@10 -- # set +x 00:18:03.703 04:34:03 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:03.703 04:34:03 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:03.703 04:34:03 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:03.703 04:34:03 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:18:03.703 04:34:03 -- spdk/autotest.sh@398 -- # hostname 00:18:03.703 04:34:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:03.963 geninfo: WARNING: invalid characters removed from testname! 
00:18:25.913 04:34:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:29.204 04:34:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:30.583 04:34:30 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:33.122 04:34:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:34.502 04:34:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:37.040 04:34:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:38.950 04:34:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:18:38.950 04:34:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:18:38.950 04:34:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:18:38.950 04:34:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:18:38.950 04:34:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:18:38.950 04:34:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:38.950 + [[ -n 6165 ]] 00:18:38.950 + sudo kill 6165 00:18:38.961 [Pipeline] } 00:18:38.978 [Pipeline] // timeout 00:18:38.984 [Pipeline] } 00:18:39.000 [Pipeline] // stage 00:18:39.006 [Pipeline] } 00:18:39.022 [Pipeline] // catchError 00:18:39.033 [Pipeline] stage 00:18:39.035 [Pipeline] { (Stop VM) 00:18:39.046 [Pipeline] sh 00:18:39.335 + vagrant halt 00:18:41.926 ==> default: Halting domain... 00:18:50.079 [Pipeline] sh 00:18:50.365 + vagrant destroy -f 00:18:52.904 ==> default: Removing domain... 
00:18:52.917 [Pipeline] sh 00:18:53.199 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:18:53.209 [Pipeline] } 00:18:53.224 [Pipeline] // stage 00:18:53.229 [Pipeline] } 00:18:53.243 [Pipeline] // dir 00:18:53.248 [Pipeline] } 00:18:53.263 [Pipeline] // wrap 00:18:53.269 [Pipeline] } 00:18:53.282 [Pipeline] // catchError 00:18:53.292 [Pipeline] stage 00:18:53.294 [Pipeline] { (Epilogue) 00:18:53.307 [Pipeline] sh 00:18:53.592 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:18:57.804 [Pipeline] catchError 00:18:57.806 [Pipeline] { 00:18:57.821 [Pipeline] sh 00:18:58.107 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:18:58.107 Artifacts sizes are good 00:18:58.118 [Pipeline] } 00:18:58.132 [Pipeline] // catchError 00:18:58.143 [Pipeline] archiveArtifacts 00:18:58.151 Archiving artifacts 00:18:58.259 [Pipeline] cleanWs 00:18:58.271 [WS-CLEANUP] Deleting project workspace... 00:18:58.271 [WS-CLEANUP] Deferred wipeout is used... 00:18:58.278 [WS-CLEANUP] done 00:18:58.280 [Pipeline] } 00:18:58.296 [Pipeline] // stage 00:18:58.301 [Pipeline] } 00:18:58.315 [Pipeline] // node 00:18:58.323 [Pipeline] End of Pipeline 00:18:58.368 Finished: SUCCESS